Diffstat (limited to 'doc/user/project/clusters')
 doc/user/project/clusters/add_eks_clusters.md     | 29
 doc/user/project/clusters/add_gke_clusters.md     | 12
 doc/user/project/clusters/eks_and_gitlab/index.md |  3
 doc/user/project/clusters/index.md                | 66
 doc/user/project/clusters/runbooks/index.md       |  8
 doc/user/project/clusters/serverless/aws.md       | 26
 doc/user/project/clusters/serverless/index.md     | 50
 7 files changed, 98 insertions(+), 96 deletions(-)
diff --git a/doc/user/project/clusters/add_eks_clusters.md b/doc/user/project/clusters/add_eks_clusters.md
index c3f2b96ce9f..1ba70c96dcd 100644
--- a/doc/user/project/clusters/add_eks_clusters.md
+++ b/doc/user/project/clusters/add_eks_clusters.md
@@ -23,9 +23,9 @@ requirements are met:
### Additional requirements for self-managed instances **(CORE ONLY)**
If you are using a self-managed GitLab instance, GitLab must first be configured with a set of
-Amazon credentials. These credentials will be used to assume an Amazon IAM role provided by the user
+Amazon credentials. These credentials are used to assume an Amazon IAM role provided by the user
creating the cluster. Create an IAM user and ensure it has permissions to assume the role(s) that
-your users will use to create EKS clusters.
+your users need to create EKS clusters.
For example, the following policy document allows assuming a role whose name starts with
`gitlab-eks-` in account `123456789012`:
@@ -60,7 +60,7 @@ To create and add a new Kubernetes cluster to your project, group, or instance:
- Group's **Kubernetes** page, for a group-level cluster.
- **Admin Area > Kubernetes**, for an instance-level cluster.
1. Click **Add Kubernetes cluster**.
-1. Under the **Create new cluster** tab, click **Amazon EKS**. You will be provided with an
+1. Under the **Create new cluster** tab, click **Amazon EKS** to display an
`Account ID` and `External ID` needed for later steps.
1. In the [IAM Management Console](https://console.aws.amazon.com/iam/home), create an IAM policy:
1. From the left panel, select **Policies**.
@@ -137,9 +137,9 @@ To create and add a new Kubernetes cluster to your project, group, or instance:
1. Click **Next: Tags**, and optionally enter any tags you wish to associate with this role.
1. Click **Next: Review**.
1. Enter a role name and optional description into the fields provided.
- 1. Click **Create role**, the new role name will appear at the top. Click on its name and copy the `Role ARN` from the newly created role.
+ 1. Click **Create role**. The new role name displays at the top. Click its name and copy the `Role ARN` from the newly created role.
1. In GitLab, enter the copied role ARN into the `Role ARN` field.
-1. In the **Cluster Region** field, enter the [region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) you plan to use for your new cluster. GitLab will authenticate you have access to this region when authenticating your role.
+1. In the **Cluster Region** field, enter the [region](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) you plan to use for your new cluster. GitLab confirms you have access to this region when authenticating your role.
1. Click **Authenticate with AWS**.
1. Choose your cluster's settings:
- **Kubernetes cluster name** - The name you wish to give the cluster.
@@ -158,7 +158,7 @@ To create and add a new Kubernetes cluster to your project, group, or instance:
- **VPC** - Select a [VPC](https://docs.aws.amazon.com/vpc/latest/userguide/what-is-amazon-vpc.html)
to use for your EKS Cluster resources.
- **Subnets** - Choose the [subnets](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_Subnets.html)
- in your VPC where your worker nodes will run. You must select at least two.
+ in your VPC where your worker nodes run. You must select at least two.
- **Security group** - Choose the [security group](https://docs.aws.amazon.com/vpc/latest/userguide/VPC_SecurityGroups.html)
to apply to the EKS-managed Elastic Network Interfaces that are created in your worker node subnets.
- **Instance type** - The [instance type](https://aws.amazon.com/ec2/instance-types/) of your worker nodes.
@@ -167,11 +167,11 @@ To create and add a new Kubernetes cluster to your project, group, or instance:
See the [Managed clusters section](index.md#gitlab-managed-clusters) for more information.
1. Finally, click the **Create Kubernetes cluster** button.
-After about 10 minutes, your cluster will be ready to go. You can now proceed
+After about 10 minutes, your cluster is ready to go. You can now proceed
to install some [pre-defined applications](index.md#installing-applications).
NOTE: **Note:**
-You will need to add your AWS external ID to the
+You must add your AWS external ID to the
[IAM Role in the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-role.html#cli-configure-role-xaccount)
to manage your cluster using `kubectl`.
@@ -219,9 +219,9 @@ For information on adding an existing EKS cluster, see
### Create a default Storage Class
Amazon EKS doesn't have a default Storage Class out of the box, which means
-requests for persistent volumes will not be automatically fulfilled. As part
+requests for persistent volumes are not automatically fulfilled. As part
of Auto DevOps, the deployed PostgreSQL instance requests persistent storage,
-and without a default storage class it will fail to start.
+and without a default storage class it cannot start.
If a default Storage Class doesn't already exist and is desired, follow Amazon's
[guide on storage classes](https://docs.aws.amazon.com/eks/latest/userguide/storage-classes.html)
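That guide shows the full procedure. As a rough sketch (based on the linked Amazon guide, not copied from it), a default `gp2`-backed class looks like this:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gp2
  annotations:
    # Marks this class as the cluster default so unqualified persistent volume
    # claims (such as the Auto DevOps PostgreSQL claim) are bound automatically.
    storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
  fsType: ext4
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
```

Apply it with `kubectl apply -f storageclass.yaml`.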
@@ -239,18 +239,17 @@ to build, test, and deploy the app.
[Enable Auto DevOps](../../../topics/autodevops/index.md#at-the-project-level)
if not already enabled. If a wildcard DNS entry was created resolving to the
Load Balancer, enter it in the `domain` field under the Auto DevOps settings.
-Otherwise, the deployed app will not be externally available outside of the cluster.
+Otherwise, the deployed app isn't available outside of the cluster.
![Deploy Pipeline](img/pipeline.png)
-A new pipeline will automatically be created, which will begin to build, test,
-and deploy the app.
+GitLab creates a new pipeline, which begins to build, test, and deploy the app.
-After the pipeline has finished, your app will be running in EKS and available
+After the pipeline has finished, your app runs in EKS and is available
to users. Click on **CI/CD > Environments**.
![Deployed Environment](img/environment.png)
-You will see a list of the environments and their deploy status, as well as
+GitLab displays a list of the environments and their deploy status, as well as
options to browse to the app, view monitoring metrics, and even access a shell
on the running pod.
diff --git a/doc/user/project/clusters/add_gke_clusters.md b/doc/user/project/clusters/add_gke_clusters.md
index 720f9bdf253..c4bfddb092a 100644
--- a/doc/user/project/clusters/add_gke_clusters.md
+++ b/doc/user/project/clusters/add_gke_clusters.md
@@ -33,7 +33,7 @@ Note the following:
created by GitLab are RBAC-enabled. Take a look at the [RBAC section](add_remove_clusters.md#rbac-cluster-resources) for
more information.
- Starting from [GitLab 12.5](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/18341), the
- cluster's pod address IP range will be set to /16 instead of the regular /14. /16 is a CIDR
+ cluster's pod address IP range is set to `/16` instead of the regular `/14`. `/16` is a CIDR
notation.
- GitLab requires basic authentication enabled and a client certificate issued for the cluster to
set up an [initial service account](add_remove_clusters.md#access-controls). In [GitLab versions
@@ -57,20 +57,20 @@ To create and add a new Kubernetes cluster to your project, group, or instance:
- **Kubernetes cluster name** - The name you wish to give the cluster.
- **Environment scope** - The [associated environment](index.md#setting-the-environment-scope) to this cluster.
- **Google Cloud Platform project** - Choose the project you created in your GCP
- console that will host the Kubernetes cluster. Learn more about
+ console to host the Kubernetes cluster. Learn more about
[Google Cloud Platform projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects).
- **Zone** - Choose the [region zone](https://cloud.google.com/compute/docs/regions-zones/)
- under which the cluster will be created.
+ under which to create the cluster.
- **Number of nodes** - Enter the number of nodes you wish the cluster to have.
- **Machine type** - The [machine type](https://cloud.google.com/compute/docs/machine-types)
- of the Virtual Machine instance that the cluster will be based on.
+ of the Virtual Machine instance to base the cluster on.
- **Enable Cloud Run for Anthos** - Check this if you want to use Cloud Run for Anthos for this cluster.
See the [Cloud Run for Anthos section](#cloud-run-for-anthos) for more information.
- **GitLab-managed cluster** - Leave this checked if you want GitLab to manage namespaces and service accounts for this cluster.
See the [Managed clusters section](index.md#gitlab-managed-clusters) for more information.
1. Finally, click the **Create Kubernetes cluster** button.
-After a couple of minutes, your cluster will be ready to go. You can now proceed
+After a couple of minutes, your cluster is ready. You can now proceed
to install some [pre-defined applications](index.md#installing-applications).
### Cloud Run for Anthos
@@ -79,7 +79,7 @@ to install some [pre-defined applications](index.md#installing-applications).
You can choose to use Cloud Run for Anthos in place of installing Knative and Istio
separately after the cluster has been created. This means that Cloud Run
-(Knative), Istio, and HTTP Load Balancing will be enabled on the cluster at
+(Knative), Istio, and HTTP Load Balancing are enabled on the cluster at
create time and cannot be [installed or uninstalled](../../clusters/applications.md) separately.
## Existing GKE cluster
diff --git a/doc/user/project/clusters/eks_and_gitlab/index.md b/doc/user/project/clusters/eks_and_gitlab/index.md
index 895b51ea9bb..e38fbb871c1 100644
--- a/doc/user/project/clusters/eks_and_gitlab/index.md
+++ b/doc/user/project/clusters/eks_and_gitlab/index.md
@@ -3,3 +3,6 @@ redirect_to: '../add_eks_clusters.md#existing-eks-cluster'
---
This document was moved to [another location](../add_eks_clusters.md#existing-eks-cluster).
+
+<!-- This redirect file can be deleted after February 1, 2021. -->
+<!-- Before deletion, see: https://docs.gitlab.com/ee/development/documentation/#move-or-rename-a-page -->
diff --git a/doc/user/project/clusters/index.md b/doc/user/project/clusters/index.md
index 9273fb7b361..249abe9140c 100644
--- a/doc/user/project/clusters/index.md
+++ b/doc/user/project/clusters/index.md
@@ -79,8 +79,8 @@ project. That way you can have different clusters for different environments,
like dev, staging, production, and so on.
Simply add another cluster, like you did the first time, and make sure to
-[set an environment scope](#setting-the-environment-scope) that will
-differentiate the new cluster with the rest.
+[set an environment scope](#setting-the-environment-scope) that
+differentiates the new cluster from the rest.
#### Setting the environment scope
@@ -89,9 +89,9 @@ them with an environment scope. The environment scope associates clusters with [
[environment-specific variables](../../../ci/variables/README.md#limit-the-environment-scopes-of-environment-variables) work.
The default environment scope is `*`, which means all jobs, regardless of their
-environment, will use that cluster. Each scope can only be used by a single
-cluster in a project, and a validation error will occur if otherwise.
-Also, jobs that don't have an environment keyword set will not be able to access any cluster.
+environment, use that cluster. Each scope can be used only by a single cluster
+in a project; otherwise, a validation error occurs. Also, jobs that don't
+have an environment keyword set can't access any cluster.
For example, let's say the following Kubernetes clusters exist in a project:
@@ -127,13 +127,13 @@ deploy to production:
url: https://example.com/
```
-The result will then be:
+The results:
-- The Development cluster details will be available in the `deploy to staging`
+- The Development cluster details are available in the `deploy to staging`
job.
-- The production cluster details will be available in the `deploy to production`
+- The production cluster details are available in the `deploy to production`
job.
-- No cluster details will be available in the `test` job because it doesn't
+- No cluster details are available in the `test` job because it doesn't
define any environment.
## Configuring your Kubernetes cluster
@@ -157,15 +157,15 @@ applications running on the cluster.
> - [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/22011) in GitLab 11.5.
> - Became [optional](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/26565) in GitLab 11.11.
-You can choose to allow GitLab to manage your cluster for you. If your cluster is
-managed by GitLab, resources for your projects will be automatically created. See the
-[Access controls](add_remove_clusters.md#access-controls) section for details on which resources will
-be created.
+You can choose to allow GitLab to manage your cluster for you. If your cluster
+is managed by GitLab, resources for your projects are automatically created. See
+the [Access controls](add_remove_clusters.md#access-controls) section for
+details about the created resources.
-If you choose to manage your own cluster, project-specific resources will not be created
-automatically. If you are using [Auto DevOps](../../../topics/autodevops/index.md), you will
-need to explicitly provide the `KUBE_NAMESPACE` [deployment variable](#deployment-variables)
-that will be used by your deployment jobs, otherwise a namespace will be created for you.
+If you choose to manage your own cluster, project-specific resources aren't created
+automatically. If you are using [Auto DevOps](../../../topics/autodevops/index.md), you must
+explicitly provide the `KUBE_NAMESPACE` [deployment variable](#deployment-variables)
+for your deployment jobs to use; otherwise a namespace is created for you.
#### Important notes
@@ -198,10 +198,10 @@ To clear the cache:
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/24580) in GitLab 11.8.
You do not need to specify a base domain on cluster settings when using GitLab Serverless. The domain in that case
-will be specified as part of the Knative installation. See [Installing Applications](#installing-applications).
+is specified as part of the Knative installation. See [Installing Applications](#installing-applications).
-Specifying a base domain will automatically set `KUBE_INGRESS_BASE_DOMAIN` as an environment variable.
-If you are using [Auto DevOps](../../../topics/autodevops/index.md), this domain will be used for the different
+Specifying a base domain automatically sets `KUBE_INGRESS_BASE_DOMAIN` as an environment variable.
+If you are using [Auto DevOps](../../../topics/autodevops/index.md), this domain is used for the different
stages. For example, Auto Review Apps and Auto Deploy.
The domain should have a wildcard DNS configured to the Ingress IP address. After Ingress has been installed (see [Installing Applications](#installing-applications)),
@@ -224,7 +224,7 @@ Auto DevOps automatically detects, builds, tests, deploys, and monitors your
applications.
To make full use of Auto DevOps (Auto Deploy, Auto Review Apps, and
-Auto Monitoring) you will need the Kubernetes project integration enabled, but
+Auto Monitoring), the Kubernetes project integration must be enabled, but
Kubernetes clusters can be used without Auto DevOps.
[Read more about Auto DevOps](../../../topics/autodevops/index.md)
@@ -238,7 +238,7 @@ A Kubernetes cluster can be the destination for a deployment job. If
and configuration is not required. You can immediately begin interacting with
the cluster from your jobs using tools such as `kubectl` or `helm`.
- You don't use GitLab's cluster integration, you can still deploy to your
- cluster. However, you will need configure Kubernetes tools yourself
+ cluster. However, you must configure Kubernetes tools yourself
using [environment variables](../../../ci/variables/README.md#custom-environment-variables)
before you can interact with the cluster from your jobs.
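For that second case, a deployment job can assemble its own `kubeconfig` from custom variables before calling `kubectl`. This is only a sketch: the variable names (`K8S_API_URL`, `K8S_CA_FILE`, `K8S_TOKEN`) and the image are placeholders you define yourself, not values provided by GitLab.

```yaml
deploy:
  stage: deploy
  image: bitnami/kubectl:latest  # any image that ships kubectl works
  script:
    # Build a kubeconfig from variables defined in Settings > CI/CD > Variables.
    # K8S_CA_FILE is assumed to be a "File" type variable, so it expands to a path.
    - kubectl config set-cluster my-cluster --server="$K8S_API_URL" --certificate-authority="$K8S_CA_FILE"
    - kubectl config set-credentials deployer --token="$K8S_TOKEN"
    - kubectl config set-context my-context --cluster=my-cluster --user=deployer
    - kubectl config use-context my-context
    - kubectl get pods
  environment:
    name: production
```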
@@ -257,14 +257,14 @@ The Kubernetes cluster integration exposes the following
GitLab CI/CD build environment to deployment jobs, which are jobs that have
[defined a target environment](../../../ci/environments/index.md#defining-environments).
-| Variable | Description |
-| -------- | ----------- |
-| `KUBE_URL` | Equal to the API URL. |
-| `KUBE_TOKEN` | The Kubernetes token of the [environment service account](add_remove_clusters.md#access-controls). Prior to GitLab 11.5, `KUBE_TOKEN` was the Kubernetes token of the main service account of the cluster integration. |
-| `KUBE_NAMESPACE` | The namespace associated with the project's deployment service account. In the format `<project_name>-<project_id>-<environment>`. For GitLab-managed clusters, a matching namespace is automatically created by GitLab in the cluster. If your cluster was created before GitLab 12.2, the default `KUBE_NAMESPACE` is set to `<project_name>-<project_id>`. |
-| `KUBE_CA_PEM_FILE` | Path to a file containing PEM data. Only present if a custom CA bundle was specified. |
-| `KUBE_CA_PEM` | (**deprecated**) Raw PEM data. Only if a custom CA bundle was specified. |
-| `KUBECONFIG` | Path to a file containing `kubeconfig` for this deployment. CA bundle would be embedded if specified. This configuration also embeds the same token defined in `KUBE_TOKEN` so you likely will only need this variable. This variable name is also automatically picked up by `kubectl` so you won't actually need to reference it explicitly if using `kubectl`. |
+| Variable | Description |
+|----------------------------|-------------|
+| `KUBE_URL` | Equal to the API URL. |
+| `KUBE_TOKEN` | The Kubernetes token of the [environment service account](add_remove_clusters.md#access-controls). Prior to GitLab 11.5, `KUBE_TOKEN` was the Kubernetes token of the main service account of the cluster integration. |
+| `KUBE_NAMESPACE` | The namespace associated with the project's deployment service account. In the format `<project_name>-<project_id>-<environment>`. For GitLab-managed clusters, a matching namespace is automatically created by GitLab in the cluster. If your cluster was created before GitLab 12.2, the default `KUBE_NAMESPACE` is set to `<project_name>-<project_id>`. |
+| `KUBE_CA_PEM_FILE` | Path to a file containing PEM data. Only present if a custom CA bundle was specified. |
+| `KUBE_CA_PEM` | (**deprecated**) Raw PEM data. Only if a custom CA bundle was specified. |
+| `KUBECONFIG` | Path to a file containing `kubeconfig` for this deployment. CA bundle would be embedded if specified. This configuration also embeds the same token defined in `KUBE_TOKEN` so you likely need only this variable. This variable name is also automatically picked up by `kubectl` so you don't need to reference it explicitly if using `kubectl`. |
| `KUBE_INGRESS_BASE_DOMAIN` | From GitLab 11.8, this variable can be used to set a domain per cluster. See [cluster domains](#base-domain) for more information. |
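As an illustration (a sketch, not taken from this page; the image and manifest path are examples), a deployment job typically consumes these variables like so:

```yaml
deploy to production:
  stage: deploy
  image: bitnami/kubectl:latest  # example image; anything with kubectl works
  script:
    # kubectl picks up $KUBECONFIG automatically, so no extra setup is needed.
    - kubectl apply --namespace "$KUBE_NAMESPACE" -f manifests/
    - kubectl rollout status --namespace "$KUBE_NAMESPACE" deployment/my-app
  environment:
    name: production  # an environment is required for the variables to be passed
    url: https://example.com/
```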
### Custom namespace
@@ -362,7 +362,7 @@ the deployment job:
- A namespace.
- A service account.
-However, sometimes GitLab can not create them. In such instances, your job will fail with the message:
+However, sometimes GitLab cannot create them. In such instances, your job can fail with the message:
```plaintext
This job failed because the necessary resources were not successfully created.
@@ -376,7 +376,7 @@ Reasons for failure include:
privileges required by GitLab.
- Missing `KUBECONFIG` or `KUBE_TOKEN` variables. To be passed to your job, they must have a matching
[`environment:name`](../../../ci/environments/index.md#defining-environments). If your job has no
- `environment:name` set, it will not be passed the Kubernetes credentials.
+ `environment:name` set, the Kubernetes credentials are not passed to it.
NOTE: **Note:**
Project-level clusters upgraded from GitLab 12.0 or older may be configured
@@ -396,6 +396,6 @@ Automatically detect and monitor Kubernetes metrics. Automatic monitoring of
> - [Introduced](https://gitlab.com/gitlab-org/gitlab/-/merge_requests/4701) in [GitLab Ultimate](https://about.gitlab.com/pricing/) 10.6.
> - [Moved](https://gitlab.com/gitlab-org/gitlab/-/issues/208224) to GitLab Core in 13.2.
-When [Prometheus is deployed](#installing-applications), GitLab will automatically monitor the cluster's health. At the top of the cluster settings page, CPU and Memory utilization is displayed, along with the total amount available. Keeping an eye on cluster resources can be important, if the cluster runs out of memory pods may be shutdown or fail to start.
+When [Prometheus is deployed](#installing-applications), GitLab monitors the cluster's health. At the top of the cluster settings page, CPU and Memory utilization is displayed, along with the total amount available. Keeping an eye on cluster resources can be important; if the cluster runs out of memory, pods may be shut down or fail to start.
![Cluster Monitoring](img/k8s_cluster_monitoring.png)
diff --git a/doc/user/project/clusters/runbooks/index.md b/doc/user/project/clusters/runbooks/index.md
index c1e4e821efd..7dfca5314da 100644
--- a/doc/user/project/clusters/runbooks/index.md
+++ b/doc/user/project/clusters/runbooks/index.md
@@ -37,7 +37,7 @@ for an overview of how this is accomplished in GitLab!
## Requirements
-To create an executable runbook, you will need:
+To create an executable runbook, you need:
- **Kubernetes** - A Kubernetes cluster is required to deploy the rest of the
applications. The simplest way to get started is to add a cluster using one
@@ -71,7 +71,7 @@ the components outlined above and the pre-loaded demo runbook.
![install ingress](img/ingress-install.png)
1. After Ingress has been installed successfully, click the **Install** button next
- to the **JupyterHub** application. You will need the **Jupyter Hostname** provided
+ to the **JupyterHub** application. You need the **Jupyter Hostname** provided
here in the next step.
![install JupyterHub](img/jupyterhub-install.png)
@@ -84,8 +84,8 @@ the components outlined above and the pre-loaded demo runbook.
![authorize Jupyter](img/authorize-jupyter.png)
-1. Click **Authorize**, and you will be redirected to the JupyterHub application.
-1. Click **Start My Server**, and the server will start in a few seconds.
+1. Click **Authorize**, and GitLab redirects you to the JupyterHub application.
+1. Click **Start My Server**. The server starts in a few seconds.
1. To configure the runbook's access to your GitLab project, you must enter your
[GitLab Access Token](../../../profile/personal_access_tokens.md)
and your Project ID in the **Setup** section of the demo runbook:
diff --git a/doc/user/project/clusters/serverless/aws.md b/doc/user/project/clusters/serverless/aws.md
index 0de0fd38336..6dce37877c6 100644
--- a/doc/user/project/clusters/serverless/aws.md
+++ b/doc/user/project/clusters/serverless/aws.md
@@ -29,7 +29,7 @@ Alternatively, you can quickly [create a new project with a template](../../../.
### Example
-In the following example, you will:
+This example shows you how to:
1. Create a basic AWS Lambda Node.js function.
1. Link the function to an API Gateway `GET` endpoint.
Let's take it step by step.
#### Creating a Lambda handler function
-Your Lambda function will be the primary handler of requests. In this case we will create a very simple Node.js `hello` function:
+Your Lambda function is the primary handler of requests. In this case, create a very simple Node.js `hello` function:
```javascript
'use strict';
@@ -72,13 +72,13 @@ Place this code in the file `src/handler.js`.
`src` is the standard location for serverless functions, but is customizable should you desire that.
-In our case, `module.exports.hello` defines the `hello` handler that will be referenced later in the `serverless.yml`
+In our case, `module.exports.hello` defines the `hello` handler to reference later in the `serverless.yml`.
You can learn more about the AWS Lambda Node.js function handler and all its various options here: <https://docs.aws.amazon.com/lambda/latest/dg/nodejs-prog-model-handler.html>
#### Creating a `serverless.yml` file
-In the root of your project, create a `serverless.yml` file that will contain configuration specifics for the Serverless Framework.
+In the root of your project, create a `serverless.yml` file containing configuration specifics for the Serverless Framework.
Put the following code in the file:
@@ -97,9 +97,9 @@ functions:
Our function contains a handler and an event.
-The handler definition will provision the Lambda function using the source code located `src/handler.hello`.
+The handler definition provisions the Lambda function using the source code located at `src/handler.hello`.
-The `events` declaration will create a AWS API Gateway `GET` endpoint to receive external requests and hand them over to the Lambda function via a service integration.
+The `events` declaration creates an AWS API Gateway `GET` endpoint to receive external requests and hand them over to the Lambda function via a service integration.
You can read more about the [available properties and additional configuration possibilities](https://www.serverless.com/framework/docs/providers/aws/guide/serverless.yml/) of the Serverless Framework.
@@ -141,10 +141,10 @@ For more information please see [Create a custom variable in the UI](../../../..
#### Deploying your function
-`git push` the changes to your GitLab repository and the GitLab build pipeline will automatically deploy your function.
+`git push` the changes to your GitLab repository and the GitLab build pipeline deploys your function.
-In your GitLab deploy stage log, there will be output containing your AWS Lambda endpoint URL.
-The log line will look similar to this:
+Your GitLab deploy stage log contains your AWS Lambda endpoint URL,
+in log lines similar to this:
```plaintext
endpoints:
@@ -227,7 +227,7 @@ provider:
```
From there, you can reference them in your functions as well.
-Remember to add `A_VARIABLE` to your GitLab CI/CD variables under **Settings > CI/CD > Variables**, and it will get picked up and deployed with your function.
+Remember to add `A_VARIABLE` to your GitLab CI/CD variables under **Settings > CI/CD > Variables** so it is picked up and deployed with your function.
NOTE: **Note:**
Anyone with access to the AWS environment may be able to see the values of those
@@ -309,7 +309,7 @@ GitLab allows developers to build and deploy serverless applications using the c
### Example
-In the following example, you will:
+This example shows you how to:
- Install SAM CLI.
- Create a sample SAM application including a Lambda function and API Gateway.
@@ -414,8 +414,8 @@ Let’s examine the configuration file more closely:
### Deploying your application
-Push changes to your GitLab repository and the GitLab build pipeline will automatically
-deploy your application. If your:
+Push changes to your GitLab repository and the GitLab build pipeline
+deploys your application. If your:
- Build and deploy are successful, [test your deployed application](#testing-the-deployed-application).
- Build fails, look at the build log to see why the build failed. Some common reasons
diff --git a/doc/user/project/clusters/serverless/index.md b/doc/user/project/clusters/serverless/index.md
index 603c4bd73b1..6625a6b314f 100644
--- a/doc/user/project/clusters/serverless/index.md
+++ b/doc/user/project/clusters/serverless/index.md
@@ -39,9 +39,9 @@ With GitLab Serverless, you can deploy both functions-as-a-service (FaaS) and se
## Prerequisites
-To run Knative on GitLab, you will need:
+To run Knative on GitLab, you need:
-1. **Existing GitLab project:** You will need a GitLab project to associate all resources. The simplest way to get started:
+1. **Existing GitLab project:** You need a GitLab project to associate all resources. The simplest way to get started:
- If you are planning on [deploying functions](#deploying-functions),
clone the [functions example project](https://gitlab.com/knative-examples/functions) to get
started.
@@ -51,19 +51,19 @@ To run Knative on GitLab, you will need:
1. **Kubernetes Cluster:** An RBAC-enabled Kubernetes cluster is required to deploy Knative.
The simplest way to get started is to add a cluster using GitLab's [GKE integration](../add_remove_clusters.md).
The set of minimum recommended cluster specifications to run Knative is 3 nodes, 6 vCPUs, and 22.50 GB memory.
-1. **GitLab Runner:** A runner is required to run the CI jobs that will deploy serverless
+1. **GitLab Runner:** A runner is required to run the CI jobs that deploy serverless
applications or functions onto your cluster. You can install GitLab Runner
onto the existing Kubernetes cluster. See [Installing Applications](../index.md#installing-applications) for more information.
-1. **Domain Name:** Knative will provide its own load balancer using Istio. It will provide an
- external IP address or hostname for all the applications served by Knative. You will be prompted to enter a
- wildcard domain where your applications will be served. Configure your DNS server to use the
+1. **Domain Name:** Knative provides its own load balancer using Istio, and an
+ external IP address or hostname for all the applications served by Knative. Enter a
+ wildcard domain to serve your applications. Configure your DNS server to use the
external IP address or hostname for that domain.
1. **`.gitlab-ci.yml`:** GitLab uses [Kaniko](https://github.com/GoogleContainerTools/kaniko)
to build the application. We also use [GitLab Knative tool](https://gitlab.com/gitlab-org/gitlabktl)
CLI to simplify the deployment of services and functions to Knative.
1. **`serverless.yml`** (for [functions only](#deploying-functions)): When using serverless to deploy functions, the `serverless.yml` file
- will contain the information for all the functions being hosted in the repository as well as a reference to the
- runtime being used.
+ contains the information for all the functions being hosted in the repository as well as a reference
+ to the runtime being used.
1. **`Dockerfile`** (for [applications only](#deploying-serverless-applications)): Knative requires a
`Dockerfile` in order to build your applications. It should be included at the root of your
project's repository and expose port `8080`. `Dockerfile` is not required if you plan to build serverless functions
@@ -92,7 +92,7 @@ memory. **RBAC must be enabled.**
For clusters created on GKE, see [GKE Cluster Access](https://cloud.google.com/kubernetes-engine/docs/how-to/cluster-access-for-kubectl),
for other platforms [Install kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
-1. The Ingress is now available at this address and will route incoming requests to the proper service based on the DNS
+1. The Ingress is now available at this address and routes incoming requests to the proper service based on the DNS
name in the request. To support this, a wildcard DNS record should be created for the desired domain name. For example,
if your Knative base domain is `knative.info`, then you need to create an A record or CNAME record with domain `*.knative.info`
pointing to the IP address or hostname of the Ingress.
@@ -107,7 +107,7 @@ on a given project, but not both. The current implementation makes use of a
> [Introduced](https://gitlab.com/gitlab-org/gitlab-foss/-/issues/58941) in GitLab 12.0.
-The _invocations_ monitoring feature of GitLab serverless won't work when
+The _invocations_ monitoring feature of GitLab serverless is unavailable when
adding an existing installation of Knative.
It's also possible to use GitLab Serverless with an existing Kubernetes cluster
@@ -121,7 +121,7 @@ which already has Knative installed. You must do the following:
- For a non-GitLab managed cluster, ensure that the service account for the token
provided can manage resources in the `serving.knative.dev` API group.
- For a GitLab managed cluster, if you added the cluster in [GitLab 12.1 or later](https://gitlab.com/gitlab-org/gitlab-foss/-/merge_requests/30235),
- then GitLab will already have the required access and you can proceed to the next step.
+ then GitLab already has the required access and you can proceed to the next step.
Otherwise, you need to manually grant GitLab's service account the ability to manage
resources in the `serving.knative.dev` API group. Since every GitLab service account
@@ -234,12 +234,12 @@ Follow these steps to deploy a function using the Node.js runtime to your
Knative instance (you can skip these steps if you've cloned the example
project):
-1. Create a directory that will house the function. In this example we will
+1. Create a directory to house the function. In this example, we
create a directory called `echo` at the root of the project.
-1. Create the file that will contain the function code. In this example, our file is called `echo.js` and is located inside the `echo` directory. If your project is:
+1. Create the file to contain the function code. In this example, our file is called `echo.js` and is located inside the `echo` directory. If your project is:
- Public, continue to the next step.
- - Private, you will need to [create a GitLab deploy token](../../deploy_tokens/index.md#creating-a-deploy-token) with `gitlab-deploy-token` as the name and the `read_registry` scope.
+ - Private, you must [create a GitLab deploy token](../../deploy_tokens/index.md#creating-a-deploy-token) with `gitlab-deploy-token` as the name and the `read_registry` scope.
1. `.gitlab-ci.yml`: this defines a pipeline used to deploy your functions.
It must be included at the root of your repository:
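As a rough sketch (not the exact file from this page), such a pipeline can include GitLab's Serverless CI template and extend its hidden build and deploy jobs; verify the job names against the template you include:

```yaml
include:
  - template: Serverless.gitlab-ci.yml

functions:build:
  extends: .serverless:build:functions   # hidden job name assumed from the template
  environment: production

functions:deploy:
  extends: .serverless:deploy:functions  # hidden job name assumed from the template
  environment: production
```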
@@ -304,7 +304,7 @@ Explanation of the fields used above:
| Parameter | Description |
|-----------|-------------|
-| `service` | Name for the Knative service which will serve the function. |
+| `service` | Name for the Knative service which serves the function. |
| `description` | A short description of the `service`. |
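Concretely, these keys sit at the top level of `serverless.yml`; the values below are placeholders, not taken from this page:

```yaml
service: my-functions    # Knative service that serves the functions
description: "Functions deployed from this repository"
```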
### `provider`
@@ -349,9 +349,9 @@ The optional `runtime` parameter can refer to one of the following runtime alias
| `openfaas/classic/ruby` | OpenFaaS |
After the `gitlab-ci.yml` template has been added and the `serverless.yml` file
-has been created, pushing a commit to your project will result in a CI pipeline
-being executed which will deploy each function as a Knative service. Once the
-deploy stage has finished, additional details for the function will appear
+has been created, pushing a commit to your project results in a CI pipeline
+that deploys each function as a Knative service. After the
+deploy stage has finished, additional details for the function display
under **Operations > Serverless**.
![serverless page](img/serverless-page.png)
@@ -482,7 +482,7 @@ A `serverless.yml` file is not required when deploying serverless applications.
### Deploy the application with Knative
-With all the pieces in place, the next time a CI pipeline runs, the Knative application will be deployed. Navigate to
+With all the pieces in place, the next time a CI pipeline runs, the Knative application deploys. Navigate to
**CI/CD > Pipelines** and click the most recent pipeline.
### Function details
@@ -498,13 +498,13 @@ rows to bring up the function details page.
![function_details](img/function-details-loaded.png)
-The pod count will give you the number of pods running the serverless function instances on a given cluster.
+The pod count gives you the number of pods running the serverless function instances on a given cluster.
For the Knative function invocations to appear,
[Prometheus must be installed](../index.md#installing-applications).
Once Prometheus is installed, a message may appear indicating that the metrics data _is
-loading or is not available at this time._ It will appear upon the first access of the
+loading or is not available at this time._ It appears when you first access the
page, but should go away after a few seconds. If the message does not disappear, then it
is possible that GitLab is unable to connect to the Prometheus instance running on the
cluster.
@@ -558,7 +558,7 @@ Or:
## Enabling TLS for Knative services
-By default, a GitLab serverless deployment will be served over `http`. To serve
+By default, a GitLab serverless deployment is served over `http`. To serve
over `https`, you must manually obtain and install TLS certificates.
@@ -647,7 +647,7 @@ or with other versions of Python.
```
1. Create certificate and private key files. Using the contents of the files
- returned by Certbot, we'll create two files in order to create the
+ returned by Certbot, create two files for the
Kubernetes secret:
Run the following command to see the contents of `fullchain.pem`:
@@ -828,8 +828,8 @@ or with other versions of Python.
```
After your changes are running on your Knative cluster, you can begin using the HTTPS protocol for secure access to your deployed Knative services.
- In the event a mistake is made during this process and you need to update the cert, you will need to edit the gateway `knative-ingress-gateway`
- to switch back to `PASSTHROUGH` mode. Once corrections are made, edit the file again so the gateway will use the new certificates.
+ If you make a mistake during this process and need to update the certificate, you must edit the gateway `knative-ingress-gateway`
+ to switch back to `PASSTHROUGH` mode. Once corrections are made, edit the file again so the gateway uses the new certificates.
## Using an older version of `gitlabktl`