
GitLab Runner in Kubernetes

GitLab Runner is used to run jobs defined in .gitlab-ci.yml in GitLab repositories. GitLab Runner differs from the GitLab agent for Kubernetes, which deploys containers into Kubernetes clusters for e2e tests, CI/CD, and other purposes.

The following tutorial describes how to set up GitLab Runner in our Kubernetes cluster for your project or group. We created this tutorial with help from our colleagues Pavel Břoušek, Štěpán Řihák, and Adrián Rošinec.

Prerequisites

  • GitLab repository
  • Namespace where you want to run the runners
  • kubectl installed on your laptop
  • helm installed on your laptop
  • kubeseal for securely storing secrets

Getting the Namespace

Throughout these instructions, replace <YOUR NAMESPACE> with the name of your personal or group namespace. You can check which namespaces are available to you via the Rancher GUI.

Kubectl - Interaction with the Cluster

To interact with the cluster, you need to install the kubectl tool and download your kubeconfig file. Follow the instructions on how to obtain the file here.

Helm - Package Manager for Kubernetes

GitLab Runner is distributed as a Helm package. Helm is a package manager for Kubernetes, and to be able to use Helm packages, you need to install the helm command-line tool. See official instructions here.

Kubeseal - Storing Encrypted Secrets

To safely store GitLab tokens, use Sealed Secrets. Install kubeseal; instructions can be found on the official GitHub repository. More resources about kubeseal can be found in the Git secrets section in our documentation.

Prepare GitLab Token for usage in Kubernetes

  1. Get the token for either your GitLab project or GitLab group. After following the steps for a given runner type, you will see a page where Step 1 shows your runner token (prefixed with glrt-).

  2. Create a sealed secret. First, base64-encode the runner token (echo -n <RUNNER TOKEN> | base64; the -n flag prevents a trailing newline from being encoded). Second, create a Secret YAML file. You can use the example below, where you need to change <YOUR NAMESPACE> and <BASE64 ENCODED RUNNER TOKEN>, and save the file as tokenSecret.yaml. Create the sealed secret by issuing the command kubeseal --controller-namespace sealed-secrets-operator < tokenSecret.yaml > sealed-secret.json.

apiVersion: v1
kind: Secret
metadata:
  name: gitlab-runner-secret
  namespace: <YOUR NAMESPACE>
type: Opaque
data:
  runner-registration-token: "" # leave as empty string for compatibility reasons!
  runner-token: <BASE64 ENCODED RUNNER TOKEN>

  3. Create a Secret in the cluster from the sealed secret by issuing the command kubectl apply -f sealed-secret.json -n <YOUR NAMESPACE>. When the runner is created, it will automatically read the runner token from this Secret.
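The three steps above can be condensed into a short shell session. The token value glrt-example and the namespace my-namespace below are placeholders; substitute your real token and namespace. The kubeseal and kubectl steps require cluster access, so they are shown as comments for reference:

```shell
# 1. Base64-encode the runner token (-n so no trailing newline gets encoded)
RUNNER_TOKEN_B64=$(echo -n "glrt-example" | base64)

# 2. Write the (unsealed) Secret manifest
cat > tokenSecret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: gitlab-runner-secret
  namespace: my-namespace
type: Opaque
data:
  runner-registration-token: ""  # leave as empty string for compatibility
  runner-token: ${RUNNER_TOKEN_B64}
EOF

# 3. Seal the manifest and apply the result (requires cluster access):
#    kubeseal --controller-namespace sealed-secrets-operator < tokenSecret.yaml > sealed-secret.json
#    kubectl apply -f sealed-secret.json -n my-namespace
echo "${RUNNER_TOKEN_B64}"
```

The plain tokenSecret.yaml contains the token in recoverable form, so do not commit it; only the sealed-secret.json produced by kubeseal is safe to store in Git.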

Setup GitLab Runner

  1. Optional: Prepare storage for shared cache by creating a PVC. You can use the example below where you need to put the desired amount into <DESIRED-STORAGE-SIZE> and save the file as pvc.yaml. Then create the PVC in the cluster by issuing the command kubectl create -f pvc.yaml -n <YOUR NAMESPACE>.
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gitlab-runner-cache
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: <DESIRED-STORAGE-SIZE>Gi
  storageClassName: nfs-csi

If you do not set up a shared cache, there will be no caching at all because each job runs in a new Pod that is destroyed after completion. To make cache work and speed up job execution, you need to set up either a distributed cache (e.g., CESNET S3) or a shared volume. Then modify values.yaml (step 3) accordingly.
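One way to use the PVC from step 1 as a shared cache is to mount it into job Pods via the runner's config.toml template embedded in values.yaml. The snippet below is a sketch, not the full file; the exact placement of these keys depends on your chart version, so compare it against the values.yaml you downloaded:

```yaml
runners:
  config: |
    [[runners]]
      cache_dir = "/cache"
      [runners.kubernetes]
        [[runners.kubernetes.volumes.pvc]]
          name = "gitlab-runner-cache"   # the PVC created in step 1
          mount_path = "/cache"
```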

  2. On your local machine, run helm repo add gitlab https://charts.gitlab.io to obtain the GitLab package from Helm. You can check available chart versions with helm search repo -l gitlab/gitlab-runner.

  3. Download values.yaml, which is used as a configuration file for the Helm package. This file has already been tailored to run in our cluster and should work out of the box. The official documentation explains the fields, and we recommend checking it out.

You should update the GitLab Runner version to the newest available, or to the version matching your GitLab instance. An image may not yet be available for the latest version; in that case, installation will proceed, but the GitLab Runner Pod in Rancher will be stuck in the ImagePullBackOff state, which means the image does not exist. You may also need to adjust the values for CPU, memory, and concurrent jobs throughout the file to fit within your project's quotas.

  4. Start your runner. Once you have configured values.yaml, you are ready to start your runner by issuing helm install -n <YOUR NAMESPACE> gitlab-runner -f values.yaml gitlab/gitlab-runner. You can check if the deployment has been successful by issuing the command kubectl get pods -n <YOUR NAMESPACE>, where there should be a Pod in a Running state (Ready 1/1).

If a Pod is in the ImagePullBackOff state, the image either has the wrong name or tag, has not been pushed to the registry, or does not exist at all. The image is specified in the image: section of values.yaml; check the tag there and verify on the official site that it exists. If not, choose another tag.

  5. If you created a group runner, you should see it as active under Build → Runners. If you created a project runner, you should see it under Settings → CI/CD → Project Runners.
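To verify the runner end to end, you can push a minimal pipeline to the repository. The tag k8s-runner below is an illustrative assumption; use whatever tag you assigned to your runner in values.yaml, or omit tags entirely if the runner accepts untagged jobs:

```yaml
# .gitlab-ci.yml -- minimal smoke-test job
smoke-test:
  tags:
    - k8s-runner   # hypothetical tag; match your runner's configuration
  script:
    - echo "Hello from the Kubernetes runner"
```

If the job is picked up and completes, the runner, token Secret, and values.yaml are all wired up correctly.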

Upgrade runner

To update the runner to a new version:

  1. Pause the runner in GitLab and ensure all jobs have completed. Pausing the runner prevents problems with running jobs, such as authorization errors when they complete.

  2. Run helm upgrade --namespace <YOUR NAMESPACE> -f values.yaml gitlab-runner gitlab/gitlab-runner

Known Limitations

This section describes some problems and solutions you might experience when filling out values.yaml. Our Kubernetes clusters are set up in a specific way that might require additional configuration, so first check this documentation if you experience problems.

Fixed number of services

The Helm chart does not directly support setting the required security parameters of our clusters (security context, non-root user); they need to be overridden using a beta feature called pod_spec. The downside is that each container needs to be patched individually, so there needs to be a fixed number of containers.
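As a sketch, such a pod_spec override lives in the runner's config.toml template inside values.yaml and might look like the following. The patch contents and the user ID are illustrative assumptions; the exact security context depends on what your cluster requires, and pod_spec is a beta feature that may need to be explicitly enabled for your runner version:

```yaml
runners:
  config: |
    [[runners]]
      [runners.kubernetes]
        [[runners.kubernetes.pod_spec]]
          name = "non-root-build"
          patch_type = "strategic"
          patch = '''
            containers:
            - name: build
              securityContext:
                runAsNonRoot: true
                runAsUser: 1000   # hypothetical UID; adjust to your cluster policy
          '''
```

Because each patch targets containers by name, a job that spawns additional service containers needs additional patch entries, which is why the number of containers must be fixed per runner.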

Unless you are using services in your CI, the number of containers is constant, and you can use the default values.yaml.

If you are using services, you need to create a new runner for each number of services that you use, and route jobs to the right runner using tags. For example, if you have jobs with 0 services and jobs with 3 services, you will need 2 runners. values-3s.yaml is an example for a runner with exactly 3 services.
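A job that needs exactly 3 services would then target the 3-service runner via a tag. The tag name 3-services, the service images, and the test script below are illustrative assumptions:

```yaml
# Job routed to the runner deployed from values-3s.yaml
integration-test:
  tags:
    - 3-services        # hypothetical tag of the 3-service runner
  services:
    - postgres:16
    - redis:7
    - rabbitmq:3
  script:
    - ./run-integration-tests.sh   # placeholder test entry point
```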
