GitLab Kubernetes Agent

CI/CD Integration Between GitLab and Kubernetes

This guide outlines the steps required to set up continuous integration and deployment (CI/CD) between GitLab and Kubernetes. The CI/CD process is managed by the GitLab Kubernetes Agent, which operates within the target namespace—i.e., the namespace where CI/CD actions are executed.

Limitations

A key limitation of this setup is that multi-namespace deployments from a single GitLab repository are not possible without administrative intervention. Specifically, a cluster-wide RBAC role is required to enable such deployments.
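
For illustration only, the kind of cluster-scoped binding an administrator would have to create is sketched below; the names are hypothetical, and the referenced ClusterRole would need rules similar to the Role described later in this guide.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-agent-multi-namespace   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: gitlab-agent                   # a ClusterRole carrying the required resource rules
subjects:
- kind: ServiceAccount
  name: test-gitlab-gitlab-agent       # the agent's Service Account (explained below)
  namespace: gitlab-test-ns            # the namespace where the agent runs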

Conceptual Overview

From a conceptual standpoint, the GitLab Kubernetes Agent functions as a proxy between GitLab (more precisely, the GitLab Runner that executes CI/CD pipeline jobs) and the Kubernetes API. The agent uses a Kubernetes Service Account to authenticate with the Kubernetes API.

This Service Account must be granted adequate permissions to carry out CI/CD operations. These permissions are defined via Kubernetes RBAC resources: Role, RoleBinding.

Use Case: Deploying a Helm Chart

CI/CD pipelines can automate a wide range of tasks. This guide focuses on the most common scenario: deploying a Helm chart from a Git repository into a Kubernetes namespace.

The following sections will describe the necessary configuration steps to achieve this deployment workflow.

Prerequisites

  • A namespace on a Kubernetes cluster (e.g., on the kuba-cluster cluster operated by CERIT-SC)
  • A GitLab repository (this guide uses https://gitlab.ics.muni.cz/cerit-sc/gitlab-test as an example)
  • kubectl and helm tools installed and configured to manage the Kubernetes namespace

GitLab Setup

Assume the GitLab repository (https://gitlab.ics.muni.cz/cerit-sc/gitlab-test) contains a Helm chart similar to our Langflow chart.

Log in to GitLab, navigate to your repository, and go to Operate → Kubernetes clusters. Select Connect a cluster, as shown in the screenshot below:

[Screenshot: Operate → Kubernetes clusters page with the Connect a cluster button]

Ignore the Flux section and enter a name for the agent, for example, my-agent.

On the next screen, make note of the access token and the initial helm setup commands (the first two lines shown).

[Screenshot: agent registration dialog showing the access token and the Helm installation commands]

Do not expose the access token! It can be used to control your Kubernetes namespace. This is the only time the token will be displayed. If you lose it, you will need to register the cluster again to obtain a new token.

You can now close the setup dialog. By default, the agent uses the generated configuration. If you need a custom configuration, create a new file in your Git repository at .gitlab/agents/<agent-name>/config.yaml. Make sure <agent-name> matches the name used during the setup (e.g., .gitlab/agents/my-agent/config.yaml).

A sample custom agent configuration is shown below:

config.yaml
ci_access:
  projects:
    - id: cerit-sc/gitlab-test
gitops:
  manifest_projects:
    - id: gitlab-test
      default_namespace: gitlab-test-ns
      paths:
        - glob: '*.yaml'
        - glob: 'templates/*'
        - glob: 'charts/*'

Kubernetes Setup

Add the GitLab Helm repository and update it:

helm repo add gitlab https://charts.gitlab.io
helm repo update

Create a new file values.yaml:

values.yaml
replicas: 1
resources:
  requests:
    cpu: "50m"
    memory: "50Mi"
  limits:
    cpu: "1"
    memory: "512Mi"
podSecurityContext:
  runAsNonRoot: true
  seccompProfile:
    type: RuntimeDefault
securityContext:
  runAsUser: 1000
  allowPrivilegeEscalation: false
  capabilities:
    drop:
    - ALL
rbac:
  create: false
config:
  operational_container_scanning:
    enabled: false
  kasAddress: 'wss://gitlab.ics.muni.cz/-/kubernetes-agent/'
  token: [token-from-connect-a-cluster]
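
If you prefer not to keep the token in values.yaml, one possible alternative (a sketch, not required by this guide) is to omit the token line and pass it on the command line when installing the chart later on:

helm install test-gitlab gitlab/gitlab-agent -n [yournamespace] -f values.yaml \
  --set config.token=[token-from-connect-a-cluster]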

Create a file role.yaml:

role.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-agent
rules:
- apiGroups: [""]
  resources: [events]
  verbs: [create]
- apiGroups: ["coordination.k8s.io"]
  resources: [leases]
  verbs: [create, get, delete, watch, update]

- apiGroups: [""]
  resources: [configmaps, secrets, persistentvolumeclaims, services, serviceaccounts]
  verbs: [create, get, delete, list, patch, update]
- apiGroups: ["apps"]
  resources: [deployments, statefulsets]
  verbs: [create, get, delete, patch]
- apiGroups: ["networking.k8s.io"]
  resources: [ingresses, networkpolicies]
  verbs: [create, get, delete, patch]
- apiGroups: ["postgresql.cnpg.io"]
  resources: [clusters]
  verbs: [create, get, delete, patch]
- apiGroups: ["policy"]
  resources: [poddisruptionbudgets]
  verbs: [create, get, delete, patch]

The first two resources (events and leases) are always required. Additional resources must be listed based on those used in your Helm chart. If a required resource is not specified in the role, the GitLab pipeline job will fail with an error such as:

Error: Unable to continue with install: could not get information about the resource 
PodDisruptionBudget "langflow-redis-master" in namespace "gitlab-test-ns": 
poddisruptionbudgets.policy "langflow-redis-master" is forbidden: 
User "system:serviceaccount:gitlab-test-ns:test-gitlab-gitlab-agent" cannot get 
resource "poddisruptionbudgets" in API group "policy" in the namespace "gitlab-test-ns": 
RBAC: clusterrole.rbac.authorization.k8s.io "fleet-content" not found

This indicates a missing role configuration, such as:

- apiGroups: ["policy"]
  resources: [poddisruptionbudgets]
  verbs: [create, get, delete, patch]

Helm typically requires the verbs [create, get, delete, patch], and occasionally also [list] and [update]. These verbs represent the actions permitted for a given resource and API group, and can be combined as needed.

You can identify required apiGroups (shown in YAML files as apiVersion) and resources (shown as kind) from your Helm chart using the following command in your local Git repository:

for i in `find . -type f -name '*.yaml'`; do grep '^apiVersion:' $i | sort -u; grep '^kind:' $i | sort -u; done
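
For a chart that contains, say, a Deployment, a Service, and an Ingress, the output might look like this (illustrative only; your chart will differ):

apiVersion: apps/v1
kind: Deployment
apiVersion: v1
kind: Service
apiVersion: networking.k8s.io/v1
kind: Ingress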

If apiVersion contains no slash (e.g., it is simply v1), the corresponding apiGroup is the empty string: [""]. Otherwise, the apiGroup is the portion of apiVersion before the /, i.e., the version suffix such as /v1 or /v2 is removed. For example, apps/v1 corresponds to apiGroup: ["apps"].

The kind field is listed in singular form (e.g., Secret), whereas the resources field in the role must use the plural, lowercase form (e.g., secrets). Watch out for irregular plurals like networkpolicies.
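
Putting both rules together: a manifest with apiVersion: networking.k8s.io/v1 and kind: NetworkPolicy translates into the following role entry (already included in the sample role above):

- apiGroups: ["networking.k8s.io"]
  resources: [networkpolicies]
  verbs: [create, get, delete, patch]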

Create a rolebinding.yaml file:

rolebinding.yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-agent
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: gitlab-agent
subjects:
- kind: ServiceAccount
  name: [gitlab-agent-serviceaccount-name]
  namespace: [yournamespace]

Replace [yournamespace] with your actual namespace and [gitlab-agent-serviceaccount-name] with the name derived from your Helm release name suffixed with -gitlab-agent. For example, if your Helm release is test-gitlab, use test-gitlab-gitlab-agent.
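
For example, with the Helm release test-gitlab and a namespace called gitlab-test-ns, the subjects section would read:

subjects:
- kind: ServiceAccount
  name: test-gitlab-gitlab-agent
  namespace: gitlab-test-ns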

Deploy the role and role binding:

kubectl create -f role.yaml -n [yournamespace]
kubectl create -f rolebinding.yaml -n [yournamespace]
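
Optionally, if your cluster account is allowed to impersonate service accounts, you can verify the resulting permissions before installing the agent (the Service Account name below assumes the test-gitlab release used in the next step):

kubectl auth can-i create deployments -n [yournamespace] \
  --as=system:serviceaccount:[yournamespace]:test-gitlab-gitlab-agent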

Install the GitLab Kubernetes Agent:

helm install test-gitlab gitlab/gitlab-agent -n [yournamespace] -f values.yaml

The values.yaml file referenced above must be saved in your current working directory before running the Helm install command.

Once deployed, the GitLab Agent should be running within your specified namespace. You can verify this by listing the pods:

kubectl get pods -n [yournamespace]

To check if the agent is working properly, inspect the logs of the agent Pod. This can be done either through your Kubernetes dashboard (e.g., Rancher) or via the command line:

kubectl logs [gitlab-agent-pod] -n [yournamespace]

You may encounter a log message such as:

{"time":"2025-05-23T14:52:22.709117293Z","level":"INFO","msg":"Flux could not be detected or the Agent is missing RBAC, skipping module. A restart is required for this to be checked again","mod_name":"flux"}

This message is normal and expected. It simply indicates that the Flux module is not in use because Flux CD is not installed in the cluster. Unless you're explicitly using Flux-based GitOps workflows, it can be safely ignored.

CI/CD Setup

The final step in the deployment process is configuring CI/CD within your GitLab repository. Create a .gitlab-ci.yml file at the root of your repository with the following contents:

deploy:
  image:
    name: alpine/k8s:1.32.4
  script:
    - kubectl config use-context [yourproject]:[your-agent-name] # e.g., cerit-sc/gitlab-test:my-agent
    - helm upgrade --install langflow -n [yournamespace] . -f values.yaml

Replace [yourproject], [your-agent-name], and [yournamespace] with your actual GitLab project path, configured agent name, and Kubernetes namespace, respectively.
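
If you want the deployment to run only from your default branch and to appear under GitLab environments, one possible variant of the job is sketched below; rules and environment are standard GitLab CI keywords, and the values are examples to adjust to your own setup:

deploy:
  image:
    name: alpine/k8s:1.32.4
  environment: production
  rules:
    - if: $CI_COMMIT_BRANCH == $CI_DEFAULT_BRANCH
  script:
    - kubectl config use-context [yourproject]:[your-agent-name]
    - helm upgrade --install langflow -n [yournamespace] . -f values.yaml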

If you’re unsure what value to use in the use-context line, you can add the following command to the top of your script section to list all available contexts:

    - kubectl config get-contexts

Check the job logs in GitLab after the pipeline runs to see the available contexts and verify that the correct one is being used.

Saving this file should automatically trigger a CI/CD pipeline in GitLab. If the job fails due to RBAC permissions (similar to the errors described earlier), simply update your role.yaml file with the necessary permissions and reapply it:

kubectl apply -f role.yaml -n [yournamespace]

After applying the updated role, you can rerun the failed pipeline job. There is no need to restart or reinstall the GitLab Agent.
