
Exposing Applications

This chapter explains how to make your application reachable from outside the cluster. Although it is possible to access your application directly via an IP address, this approach is discouraged. Instead, follow the instructions below to expose your application properly.

There are two main types of applications that can be exposed: web-based applications and other applications. The primary distinction between these types is their IP address requirements. Web-based applications share an IP address with other web-based applications, whereas other applications require a dedicated IP address for each service. Since the number of available IP addresses is limited, using the web-based approach is preferred whenever possible.

Throughout this documentation, YAML fragments are provided. These YAML definitions should be deployed using kubectl, like so:

kubectl create -f file.yaml -n namespace

where file.yaml contains the YAML definitions, and namespace is the Namespace where the application is running.

Prerequisite

This guide assumes that your application (Deployment) is already deployed. If you are unsure, refer to the Hello World example.


Web-based Applications

Web-based applications communicate using the HTTP protocol and are exposed using Ingress resources.

Overview

The overall architecture is illustrated in the following figure:

[Figure: Networking — overall architecture of exposing an application]

To expose a web-based application, you need:

  1. A Service that binds a port to your application.
  2. An Ingress resource that exposes the Service to the Internet.

Assume we have an application running on port 8080. First, we define a Service:

apiVersion: v1
kind: Service
metadata:
  name: application-svc
spec:
  type: ClusterIP
  ports:
  - name: application-port
    port: 80
    targetPort: 8080
  selector:
    app: application

In this configuration:

  • The selector: app: application must match the application label in the Deployment.
  • The application-svc and application-port values are arbitrary names.
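For reference, a minimal sketch of the assumed Deployment is shown below; the Pod template label app: application is what the Service selector matches (the image name is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: application
spec:
  replicas: 1
  selector:
    matchLabels:
      app: application
  template:
    metadata:
      labels:
        app: application    # must match the Service selector
    spec:
      containers:
      - name: application
        image: registry.example.com/application:latest  # placeholder image
        ports:
        - containerPort: 8080   # the port the Service targetPort points to
```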

After creating the Service, we define an Ingress:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "application.dyn.cloud.e-infra.cz"
      secretName: application-dyn-cloud-e-infra-cz-tls
  rules:
  - host: "application.dyn.cloud.e-infra.cz"
    http:
      paths:
      - backend:
          service:
            name: application-svc
            port:
              number: 80
        pathType: ImplementationSpecific

Here:

  • The service.name: application-svc must match the metadata name of the Service.
  • The Ingress exposes the application at the URL application.dyn.cloud.e-infra.cz, which serves as the accessible endpoint (functioning as the “IP” for reaching your application).
  • If the domain is within dyn.cloud.e-infra.cz, it will be registered automatically.
  • The tls section enables automatic certificate management via Let’s Encrypt.
📌

Note

If the tls section is included, TLS termination occurs at the NGINX Ingress. Your application should serve traffic over HTTP, while external communication is secured via HTTPS.

Important Considerations

Security Warning:

TLS is terminated at the cluster boundary when using NGINX Ingress. Communication within the cluster, especially within a single node, remains unencrypted. If full end-to-end encryption is required, you should:

  • Omit the tls and annotations sections.
  • Provide a certificate and key directly within your Pod.

Handling Mixed HTTP/HTTPS Configurations:

Some applications may be confused when configured to use HTTP internally but exposed as HTTPS. If your application generates absolute URLs, ensure they use HTTPS.

  • Ingress sets the HTTP_X_SCHEME header to HTTPS when TLS is terminated.

  • However, many applications expect X_FORWARDED_PROTO instead (which Kubernetes NGINX does not set). For example, in Django, you should set:

    SECURE_PROXY_SSL_HEADER = ("HTTP_X_SCHEME", "https")

If your application requires HTTPS end-to-end, see the HTTPS Target section below.


Custom Domain Name (FQDN)

It is possible to use a custom domain name instead of *.dyn.cloud.e-infra.cz.

  1. Set up a CNAME DNS record.

    • For the kuba-cluster, set the CNAME target to kuba-pub.cerit-sc.cz.
    • For the kubh-cluster, set the CNAME target to kubh-pub.cerit-sc.cz.
  2. Define an Ingress with the custom domain:

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: application-ingress
      annotations:
        kubernetes.io/tls-acme: "true"
        cert-manager.io/cluster-issuer: "letsencrypt-prod"
    spec:
      ingressClassName: nginx
      tls:
        - hosts:
            - "my-name"
          secretName: my-name-tls
      rules:
      - host: "my-name"
        http:
          paths:
          - backend:
              service:
                name: application-svc
                port:
                  number: 80
            pathType: ImplementationSpecific
📌

Note

The CNAME must be pre-set and propagated before requesting a TLS certificate. Let’s Encrypt requires domain verification, meaning seamless migration from another system to this infrastructure is not easily possible.


Authentication

You can secure access to your Ingress-exposed application by requiring user authentication. This is done using a Secret and annotations.

  1. Create a Secret:

    apiVersion: v1
    kind: Secret
    metadata:
      name: secretref
    type: Opaque
    data:
       auth: password
    • secretref is an arbitrary name.

    • The password must be base64-encoded. To generate it:

      htpasswd -n username | base64 -w0

      This command returns a base64-encoded password string (like Zm9vOiRhgvYTN2aktYSDEKCg=), which should replace password in the Secret.

  2. Add the following annotations to the Ingress:

    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: secretref
    • secretref must match the metadata name of the Secret.
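As a sketch, if htpasswd is not available, an equivalent entry for the Secret's auth field can be generated with openssl (this assumes the apr1/MD5 scheme, which NGINX accepts; username and your-password are placeholders):

```shell
# Generate a base64-encoded "username:hash" entry for the Secret's auth field.
printf 'username:%s' "$(openssl passwd -apr1 'your-password')" | base64 -w0
```

You can verify the result by piping it through base64 -d; the decoded value should look like username:$apr1$....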
⚠️

Important

Authentication only applies to external traffic. Traffic from other pods within the cluster bypasses authentication when communicating directly with the Service IP. To enforce stricter access control, consider using NetworkPolicy. See the Security section.
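A minimal NetworkPolicy sketch is shown below: it allows traffic to the application Pods only from the Ingress controller's namespace, so other Pods cannot bypass basic auth via the Service IP. The namespace label value (ingress-nginx) is an assumption and may differ on your cluster:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: application-allow-ingress-only
spec:
  podSelector:
    matchLabels:
      app: application          # must match the application Pods
  policyTypes:
  - Ingress
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          # assumed label of the Ingress controller namespace
          kubernetes.io/metadata.name: ingress-nginx
```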


Large Data Uploads

If your application expects large file uploads, you may need to modify Ingress settings to accommodate higher upload limits:

nginx.ingress.kubernetes.io/proxy-body-size: "600m"
nginx.org/client-max-body-size: "600m"

Replace 600m with the desired maximum upload size (e.g., 1g for one gigabyte).
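These lines belong in the metadata.annotations section of the Ingress; a sketch based on the Ingress defined earlier:

```yaml
metadata:
  name: application-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    # raise the upload limit to 600 MB
    nginx.ingress.kubernetes.io/proxy-body-size: "600m"
    nginx.org/client-max-body-size: "600m"
```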


HTTPS Target

If your application serves HTTPS traffic internally and you want to preserve encryption end-to-end, update your Ingress definition to include the following annotation:

nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"

This ensures that traffic between Ingress and the application remains encrypted. However, since the NGINX Ingress component terminates and re-encrypts the traffic, it does not provide true end-to-end encryption.


Limiting External Network Access

You can restrict access to your application at the Ingress level to specific IP ranges using the whitelist-source-range annotation.

For example, to restrict access to only the MUNI network:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: application-ingress
  annotations:
    kubernetes.io/tls-acme: "true"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
    nginx.ingress.kubernetes.io/whitelist-source-range: 147.251.0.0/16
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "my-name"
      secretName: my-name-tls
  rules:
  - host: "my-name"
    http:
      paths:
      - backend:
          service:
            name: application-svc
            port:
              number: 80
        pathType: ImplementationSpecific
📌

Note

You can mix CIDRs (e.g., 147.251.0.0/16), individual IP addresses (e.g., 147.251.17.148), and both IPv4 and IPv6 addresses, all separated by commas. As IPv6 adoption continues to grow, be sure to include IPv6 addresses alongside IPv4 to maintain expected functionality.


Debugging: Certificate Not Issued

If your configuration is correct but the TLS certificate is not issued, the problem may be due to resource quotas preventing the required ACME challenge pod from running.

Steps to Debug

  1. Check if the Ingress is stuck on the ACME solver:

    kubectl get ingress -n [your_namespace]

    If the output includes cm-acme-http-solver with an age of more than a few minutes, it likely indicates an issue.

    > kubectl get ingress -n [your_namespace]
    NAME                        CLASS    HOSTS                     ADDRESS                PORTS     AGE
    cm-acme-http-solver-972p9   <none>   schema.biodata.ceitec.cz  kuba-pub.cerit-sc.cz   80        12m
  2. Check for quota-related warnings:

    kubectl get events -n [your_namespace]

    Look for lines mentioning exceeded quota.

    > kubectl get events -n [your_namespace] | grep cm-acme-http-solver
    ...
    79s         Warning   PresentError   challenge/adamant-dyn-cloud-e-infra-cz-6dp4n-152868974-3440702542   (combined from similar events): Error presenting challenge: pods "cm-acme-http-solver-74szl" is forbidden: exceeded quota: default-kn7vq, requested: limits.cpu=100m, used: limits.cpu=1, limited: limits.cpu=1
    ...

If quota restrictions are the issue, you may need to:

  • Increase the namespace resource quota. Each ACME solver Pod requires 100m CPU and 100MiB of memory.
  • Remove some existing deployments.

You can also view events in the Rancher UI.

[Figure: acmesolver warning events in the Rancher UI]

Custom Error Pages

By default, our NGINX Ingress Controller in Kubernetes intercepts HTTP errors 500 (Internal Server Error) and 503 (Service Unavailable) and displays a custom error page with brief instructions on how to resolve the issue. However, this behavior is often undesirable for applications that intentionally return these error codes, such as JSON-based REST APIs.

To allow the original error response to be passed to the client, you need to disable the default error handling. While it is not possible to completely disable error handling once it is globally enabled, you can control which error codes are handled using the nginx.ingress.kubernetes.io/custom-http-errors annotation.

Solution

Set the nginx.ingress.kubernetes.io/custom-http-errors annotation to an unused error code, such as 599, which effectively disables error handling for all other errors (including 500 and 503), as NGINX will only attempt to handle the specified error code (599).

Example

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
    kubernetes.io/tls-acme: 'true'
    nginx.ingress.kubernetes.io/custom-http-errors: '599'

With this configuration, NGINX will no longer intercept 500 and 503 errors, allowing the original error message from the application to be returned to the client.


Other Applications

For applications that do not use the HTTP protocol, exposure is done via a Service of type LoadBalancer, without an Ingress.

apiVersion: v1
kind: Service
metadata:
  name: application-svc
  annotations:
    external-dns.alpha.kubernetes.io/hostname: application.dyn.cloud.e-infra.cz
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false
  ports:
  - port: 22
    targetPort: 2222
  selector:
    app: application
  • The selector.app: application must match the Deployment.
  • If the external-dns.alpha.kubernetes.io/hostname annotation is set to a name within dyn.cloud.e-infra.cz, the name is automatically registered in DNS.

Checking the Assigned Public IP

Run:

kubectl get svc -n namespace

where namespace is the Namespace of the Service. The IP address will be displayed in the EXTERNAL-IP column.

Private MUNI IP

To expose the application only within the MUNI network or MUNI VPN, use the following Service annotation:

metallb.universe.tf/address-pool: privmuni

This method is preferred as it avoids consuming a public IP.

apiVersion: v1
kind: Service
metadata:
  name: application-svc
  annotations:
    metallb.universe.tf/address-pool: privmuni
    external-dns.alpha.kubernetes.io/hostname: application.dyn.cloud.e-infra.cz
spec:
  type: LoadBalancer
  allocateLoadBalancerNodePorts: false
  ports:
  - port: 22
    targetPort: 2222
  selector:
    app: application
