MinIO Operator
The MinIO Operator is deployed as a cluster-wide service on Kubernetes. It enables users to provision and manage MinIO S3 object storage tenants across the entire cluster.
The MinIO Operator introduces a custom resource called Tenant, which handles the creation and management of object storage instances. You can find the full documentation on its structure here. Additionally, the official guide for creating a MinIO Tenant is available here.
To make things easier, we provide several working examples below, organized into sections.
[Figure: overall schema of MinIO components]
Deploying a Single Instance
You can start with a sample instance, ideal for testing purposes. This example creates a Tenant resource managed by the installed MinIO Operator. You can download the example here.
apiVersion: minio.min.io/v2
kind: Tenant
metadata:
  name: myminio
spec:
  ## Create users in the Tenant using this field. Make sure to create secrets per user added here.
  ## Secret should follow the format used in `minio-creds-secret`.
  users:
    - name: myminio-user-secret
  ## Set to false if self-signed certs are not required
  requestAutoCert: false
  ## Pre-create buckets
  buckets:
    - name: mybucket
  ## Environment variables; must contain at least the MINIO_ROOT_USER and MINIO_ROOT_PASSWORD variables.
  configuration:
    name: myminio-config-secret
  ## Specification for MinIO Pool(s) in this Tenant.
  pools:
    ## Servers specifies the number of MinIO Tenant Pods / Servers in this pool.
    ## For standalone mode, supply 1. For distributed mode, supply 4 or more.
    ## Note that the operator does not support upgrading from standalone to distributed mode.
    - servers: 1
      ## Custom pool name
      name: pool-0
      ## volumesPerServer specifies the number of volumes attached per MinIO Tenant Pod / Server.
      volumesPerServer: 1
      ## Select the NFS storage class.
      ## This VolumeClaimTemplate is used across all the volumes provisioned for MinIO Tenant in this Pool.
      volumeClaimTemplate:
        metadata:
          name: data
        spec:
          storageClassName: nfs-csi
          accessModes:
            - ReadWriteOnce
          resources:
            requests:
              storage: 2Gi
      ## Configure the Pod's security context
      securityContext:
        runAsNonRoot: true
        fsGroup: 1000
        runAsUser: 1000
        runAsGroup: 1000
      ## Configure the container security context
      containerSecurityContext:
        runAsNonRoot: true
        privileged: false
        runAsUser: 1000
        runAsGroup: 1000
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        seccompProfile:
          type: "RuntimeDefault"
This example also requires deploying a secret for the user. You can download the secret example.
apiVersion: v1
kind: Secret
metadata:
  name: myminio-user-secret
type: Opaque
data:
  CONSOLE_ACCESS_KEY: Y29uc29sZQ== # "console", base64-encoded
  CONSOLE_SECRET_KEY: Y29uc29sZTEyMw== # "console123", base64-encoded
To define your own access_key (username) and secret_key (password), you can manually encode or decode values using the following commands:
# encode
echo -n "securepassword" | base64
# decode
echo -n "c2VjdXJlcGFzc3dvcmQ=" | base64 -d
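As an alternative sketch, you can let kubectl perform the base64 encoding for you by creating the secret directly from literals (equivalent to the manifest above):

# Create the user secret; kubectl base64-encodes the values automatically
kubectl create secret generic myminio-user-secret -n [namespace] \
  --from-literal=CONSOLE_ACCESS_KEY=console \
  --from-literal=CONSOLE_SECRET_KEY=console123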
The last required part is a configuration secret shown below:
apiVersion: v1
kind: Secret
metadata:
  name: myminio-config-secret
type: Opaque
stringData:
  config.env: |-
    export MINIO_ROOT_USER=[replace with root username]
    export MINIO_ROOT_PASSWORD=[replace with root password, at least 8 characters!]
The MINIO_ROOT_USER and MINIO_ROOT_PASSWORD variables are not base64-encoded because the manifest defines them under stringData.
Next, deploy the secret and the MinIO Tenant with the following command:
kubectl create -n [namespace] -f example-tenant-secret.yaml -f example-tenant.yaml
If the deployment is successful, you should see a pod named myminio-pool-0-0 (or similar, depending on your configuration) running in the specified namespace.
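To verify, you can list the tenant pods and check the Tenant resource status (the v1.min.io/tenant label shown here is the one the operator typically applies; adjust if your setup differs):

# List pods belonging to the `myminio` Tenant
kubectl get pods -n [namespace] -l v1.min.io/tenant=myminio
# Show the Tenant resource and its current state
kubectl get tenant myminio -n [namespace]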
MinIO Access
To access the MinIO storage from other pods, you can use the automatically created myminio-console and myminio-hl services. These services act as DNS names, allowing other applications within the Kubernetes cluster to use them as endpoints.
If you need to access MinIO from a different namespace, use the fully qualified domain name (FQDN), e.g., myminio-hl.[namespace].svc.cluster.local.
- myminio-console:9443 — provides a web-based interface for managing MinIO.
- myminio-hl:9000 or minio:<PORT> — provides the S3-compatible API endpoint for object storage access (see the example below). The <PORT> is either 443 (if requestAutoCert=true) or 80 (if requestAutoCert=false).
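As an illustration, the following sketch shows how another pod in the same namespace could reach the S3 endpoint with the mc client, assuming requestAutoCert: false and the example credentials from above:

# Register the in-cluster endpoint under an alias (plain HTTP, since requestAutoCert is false)
mc alias set myminio http://myminio-hl:9000 console console123
# List the pre-created bucket
mc ls myminio/mybucket
# Upload a test object
mc cp ./test.txt myminio/mybucket/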
Certificates
If requestAutoCert=true, self-signed certificates are automatically generated. As a result, applications may report an unknown Certificate Authority (CA). To resolve this, you can either provide the correct CA certificate to the application or configure a custom Issuer to use external certificates for MinIO. If the default AutoCert is used, the corresponding CA certificate is available in each namespace within the ConfigMap named kube-root-ca.crt, under the ca.crt key.
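For example, a pod can mount this ConfigMap and point its TLS stack at the bundle; a minimal sketch (the pod name, image, and the SSL_CERT_FILE convention are illustrative and depend on your application):

apiVersion: v1
kind: Pod
metadata:
  name: s3-client
spec:
  containers:
    - name: app
      image: alpine:3
      command: ["sleep", "infinity"]
      env:
        # Many TLS stacks (OpenSSL, Go) honour SSL_CERT_FILE; adjust for your application
        - name: SSL_CERT_FILE
          value: /etc/ssl/cluster-ca/ca.crt
      volumeMounts:
        - name: cluster-ca
          mountPath: /etc/ssl/cluster-ca
          readOnly: true
  volumes:
    - name: cluster-ca
      configMap:
        name: kube-root-ca.crt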
Exposing MinIO Outside the Kubernetes Cluster via Ingress
To access MinIO from outside the Kubernetes cluster, it is recommended to use the Tenant Helm chart and configure the ingress section of the Tenant resource.
Step 1: Add the MinIO Helm Repository
First, add the MinIO Operator Helm repository and update the local Helm cache:
helm repo add minio-operator https://operator.min.io
helm repo update
Step 2: Generate Default Values
Create a default values.yaml file:
helm show values minio-operator/tenant > values.yaml
Step 3: Configure Ingress in values.yaml
Edit the values.yaml file and configure the ingress section as follows:
ingress:
  api:
    enabled: true
    ingressClassName: "nginx"
    labels: { }
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      kubernetes.io/tls-acme: "true"
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
      nginx.ingress.kubernetes.io/custom-http-errors: "599"
    tls:
      - hosts:
          - myminio-my-namespace.dyn.cloud.e-infra.cz
        secretName: my-secret-cloud-e-infra-cz
    host: myminio-my-namespace.dyn.cloud.e-infra.cz
    path: /
    pathType: Prefix
  console:
    enabled: true
    ingressClassName: "nginx"
    labels: { }
    annotations:
      cert-manager.io/cluster-issuer: letsencrypt-prod
      kubernetes.io/tls-acme: "true"
      nginx.ingress.kubernetes.io/backend-protocol: HTTPS
      nginx.ingress.kubernetes.io/custom-http-errors: "599"
    tls:
      - hosts:
          - myminio-ui-my-namespace.dyn.cloud.e-infra.cz
        secretName: my-secret-ui-cloud-e-infra-cz
    host: myminio-ui-my-namespace.dyn.cloud.e-infra.cz
    path: /
    pathType: Prefix
Step 4: Install the Tenant
Install the MinIO Tenant using the customized values.yaml file:
helm install myminio -n [mynamespace] minio-operator/tenant -f values.yaml
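You can then confirm that both Ingress resources exist and have hosts assigned:

kubectl get ingress -n [mynamespace]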
Step 5: Access MinIO
- MinIO Console: https://myminio-ui-my-namespace.dyn.cloud.e-infra.cz
- S3 API Endpoint: https://myminio-my-namespace.dyn.cloud.e-infra.cz
✅ Important Notes:
- Separate TLS Secrets: Do not use the same secretName for both the API and the Console. Each Ingress resource must have a unique TLS secret.
- Backend Protocol Handling: If requestAutoCert=true, ensure the annotation nginx.ingress.kubernetes.io/backend-protocol: HTTPS is set.
- MinIO Error Handling: Set nginx.ingress.kubernetes.io/custom-http-errors: "599" to prevent custom error handling by NGINX, which could hide MinIO's own error messages.
Deploying MinIO With Proxy Support
When deploying a Tenant in a Kubernetes cluster that requires an HTTP(S) proxy, you need to configure the proxy settings via environment variables.
Step 1: Set the Proxy Environment Variables
- HTTPS_PROXY: http://proxy.ics.muni.cz:3128
- HTTP_PROXY: http://proxy.ics.muni.cz:3128
- NO_PROXY: .svc.cluster.local,.svc,10.0.0.0/8,147.251.0.0/16,2001:718:801::/48,127.0.0.1,::1,.cloud.trusted.e-infra.cz

The NO_PROXY variable is essential to ensure that MinIO does not route traffic through the proxy for local cluster networks. This is particularly important for communication within the kubas-cluster.
Step 2: Configure the values.yaml File
Add the following configuration under the tenant.env section:
tenant:
  env:
    - name: HTTPS_PROXY
      value: http://proxy.ics.muni.cz:3128
    - name: HTTP_PROXY
      value: http://proxy.ics.muni.cz:3128
    - name: NO_PROXY
      value: ".svc.cluster.local,.svc,10.0.0.0/8,147.251.0.0/16,2001:718:801::/48,127.0.0.1,::1,.cloud.trusted.e-infra.cz"
✅ Notes:
- Ensure that the NO_PROXY value includes all local networks and internal services to avoid unnecessary proxy usage.
- Double-check the NO_PROXY settings, especially for IPv6 ranges, as incorrect configuration can cause connectivity issues within the cluster.
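To confirm the variables are applied, you can inspect the environment of a running tenant pod, for example:

# Check the proxy-related environment inside the MinIO pod
kubectl exec -n [namespace] myminio-pool-0-0 -- env | grep -i proxy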
Network Policy
To enhance security, a NetworkPolicy can be used to restrict access to the MinIO S3 storage, allowing connections only from specific pods. For more details, see the Network Policy documentation.
By default, external access (e.g., from the public internet) is disabled. However, it is possible to expose MinIO via a Load Balancer or Ingress (see above) if needed.
Example 1: Network Policy for MinIO Tenant
The following example restricts ingress and egress traffic for the MinIO Tenant. It allows:
- Egress communication to the local DNS resolver.
- Egress communication to the MinIO Operator installed within the Kubernetes cluster.
- Ingress communication from any pod to the MinIO S3 API (9000) and web console (9443) ports.
You can download this example here.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: myminio-np
spec:
  podSelector:
    matchLabels:
      # This NetworkPolicy is applied to the `myminio` Tenant
      app: myminio
  policyTypes:
    - Egress
    - Ingress
  egress:
    # Enables egress communication to the local DNS resolver
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
          podSelector:
            matchLabels:
              k8s-app: kube-dns
      ports:
        - protocol: UDP
          port: 53
    # Enables egress communication to the MinIO Operator installed in Kubernetes
    - to:
        - ipBlock:
            cidr: 10.43.0.1/32
        - ipBlock:
            cidr: 10.16.62.14/32
        - ipBlock:
            cidr: 10.16.62.15/32
        - ipBlock:
            cidr: 10.16.62.16/32
        - ipBlock:
            cidr: 10.16.62.17/32
        - ipBlock:
            cidr: 10.16.62.18/32
      ports:
        - port: 6443
          protocol: TCP
  ingress:
    # Enables ingress communication from any Pod to ports 9000 and 9443
    # for S3 connections and web management console access
    - from:
        - podSelector: {}
      ports:
        - protocol: TCP
          port: 9000
        - protocol: TCP
          port: 9443
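Assuming the manifest is saved as myminio-np.yaml, apply and inspect it with:

kubectl apply -n [namespace] -f myminio-np.yaml
kubectl describe networkpolicy myminio-np -n [namespace]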
Example 2: Network Policy for Application Accessing MinIO
The following policy allows a specific application (myapplication) to access the MinIO S3 API on port 9000.
You can download this example here.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: myapplication-allow-s3
spec:
  podSelector:
    matchLabels:
      # This NetworkPolicy is applied to the `myapplication` Pod
      app: myapplication
  policyTypes:
    - Egress
  egress:
    # Enables egress S3 communication from `myapplication` to the `myminio` Tenant
    - to:
        - podSelector:
            matchLabels:
              app: myminio
      ports:
        - protocol: TCP
          port: 9000
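A quick way to test the policy is to probe MinIO's health endpoint from the application pod (a sketch; it assumes curl is available in the image). Note that this egress policy does not allow DNS lookups, so you may also need a DNS egress rule like the one in Example 1, or target the service IP directly:

# Should return 200 if port 9000 is reachable from `myapplication`
kubectl exec -n [namespace] deploy/myapplication -- \
  curl -s -o /dev/null -w "%{http_code}\n" http://myminio-hl:9000/minio/health/live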
✅ Summary:
The MinIO Tenant NetworkPolicy secures the MinIO instance by controlling both ingress and egress traffic. The Application NetworkPolicy allows only specific applications to interact with the MinIO S3 API.
OIDC Authentication for MinIO with e-INFRA CZ SSO
MinIO can be configured to use e-INFRA CZ Single Sign-On (SSO) via OIDC (OpenID Connect). This guide assumes that your MinIO service has been registered in the e-INFRA CZ SP Admin Portal and that you have obtained the Client ID and Client Secret.
Step 1: Register MinIO in SP Admin
During the registration process in spadmin.e-infra.cz, configure the following:
- Callback URL: https://<your-console-url>/oauth_callback
- Scopes: eduperson_entitlement, profile, email
- Authorization flow: authorization code
- Token endpoint authentication method: client_secret_basic
- PKCE Code Challenge Method: SHA256 code challenge
- Enable introspection endpoint access
Step 2: Configure MinIO Environment Variables
On the MinIO side, set the following environment variables:
MINIO_IDENTITY_OPENID_CONFIG_URL: https://login.e-infra.cz/oidc/.well-known/openid-configuration
MINIO_IDENTITY_OPENID_CLIENT_ID: <client id>
MINIO_IDENTITY_OPENID_CLIENT_SECRET: <client secret>
MINIO_IDENTITY_OPENID_SCOPES: eduperson_entitlement,profile,email
MINIO_IDENTITY_OPENID_CLAIM_NAME: eduperson_entitlement
MINIO_IDENTITY_OPENID_REDIRECT_URI: https://<console url>/oauth_callback
MINIO_IDENTITY_OPENID_DISPLAY_NAME: "e-INFRA CZ"
MINIO_IDENTITY_OPENID_CLAIM_USERINFO: on
Step 3: Store Secrets Securely
It is not recommended to store sensitive data, such as MINIO_IDENTITY_OPENID_CLIENT_SECRET, directly in the values.yaml file. Instead, create a Kubernetes Secret:
apiVersion: v1
kind: Secret
metadata:
  name: <secret-name>
type: Opaque
stringData:
  config.env: |-
    export MINIO_ROOT_USER="<admin-username>"
    export MINIO_ROOT_PASSWORD="<admin-password>"
    export MINIO_IDENTITY_OPENID_CONFIG_URL=https://login.e-infra.cz/oidc/.well-known/openid-configuration
    export MINIO_IDENTITY_OPENID_CLIENT_ID=<client-id>
    export MINIO_IDENTITY_OPENID_CLIENT_SECRET=<client-secret>
    export MINIO_IDENTITY_OPENID_SCOPES=eduperson_entitlement,profile,email
    export MINIO_IDENTITY_OPENID_CLAIM_NAME=eduperson_entitlement
    export MINIO_IDENTITY_OPENID_REDIRECT_URI=https://<console url>/oauth_callback
    export MINIO_IDENTITY_OPENID_DISPLAY_NAME="e-INFRA CZ"
    export MINIO_IDENTITY_OPENID_CLAIM_USERINFO=on
    export HTTPS_PROXY=http://proxy.ics.muni.cz:3128
    export NO_PROXY=".svc.cluster.local,.svc,10.0.0.0/8,147.251.0.0/16,2001:718:801::/48,127.0.0.1,::1,.cloud.trusted.e-infra.cz"
Replace:
- <admin-username> with your MinIO admin username.
- <admin-password> with your MinIO admin password.
- <client-id> and <client-secret> with values from the SP Admin registration.
- <your-console-url> with your MinIO console URL.
- <secret-name> with a custom name for the Secret.
Step 4: Reference the Secret in values.yaml
In your values.yaml file, reference the Secret as follows:
tenant:
  configSecret:
    name: <secret-name>
    existingSecret: true
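Then install or update the Tenant with the modified values:

helm upgrade --install myminio -n [mynamespace] minio-operator/tenant -f values.yaml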
✅ Important Notes:
- The MinIO admin credentials (MINIO_ROOT_USER and MINIO_ROOT_PASSWORD) are still required for the initial setup, even when OIDC is enabled.
- Ensure the Secret name (<secret-name>) matches the configSecret.name in values.yaml.
- The HTTPS_PROXY and NO_PROXY variables are optional. Add them only if your cluster requires a proxy for external communication.
OIDC Roles Management in MinIO
When logging into MinIO via e-INFRA CZ SSO, you will encounter a consent page listing your roles (groups).
It is crucial to note the role (group) name, which will be used for the MinIO Access Policy. In this example, the role is urn:geant:muni:cz:group:MU:kubernetes-admins@idm.ics.muni.cz.
Step 1: Log in as an Administrator
The initial login via SSO will likely fail due to missing role-based policies. To proceed:
- Click Other Authentication Methods on the initial MinIO console web page.
- Select Use Credentials.
- Log in using the MinIO admin credentials from above:
  - Username: <admin-username>
  - Password: <admin-password>
Step 2: Create a Policy for the Role
Navigate to Administrator → Policies and create a new policy with the exact name of the role, e.g., urn:geant:muni:cz:group:MU:kubernetes-admins@idm.ics.muni.cz.
In the policy editor, define a policy similar to the following JSON:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::${jwt:preferred_username}-*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:*"
      ],
      "Resource": [
        "arn:aws:s3:::shared-bucket"
      ]
    }
  ]
}
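If you prefer the command line over the console, the same policy can be created with mc (a sketch; it assumes an alias myminio configured with the admin credentials and the JSON saved as entitlement-policy.json, and a recent mc release; older releases use mc admin policy add):

# Create the policy under the exact OIDC role name
mc admin policy create myminio \
  'urn:geant:muni:cz:group:MU:kubernetes-admins@idm.ics.muni.cz' \
  entitlement-policy.json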
✅ Explanation of the Policy:
- This policy allows users with the specified OIDC role to create and manage buckets with the naming pattern <loginname>-something. For example, a user with the login xhejtman can create a bucket called xhejtman-test.
- It also grants full access to the shared-bucket, which should be pre-created by the MinIO admin.
- However, since this policy also allows deleting buckets, consider restricting the actions (e.g., s3:PutObject, s3:GetObject) based on your security needs. See the MinIO policy documentation for more details, and the sketch after this list.
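For instance, a more restrictive variant of the second statement could allow listing and object read/write on the shared bucket without permitting bucket deletion (a sketch only; adapt the action list to your needs):

{
  "Effect": "Allow",
  "Action": [
    "s3:ListBucket",
    "s3:GetObject",
    "s3:PutObject"
  ],
  "Resource": [
    "arn:aws:s3:::shared-bucket",
    "arn:aws:s3:::shared-bucket/*"
  ]
}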
🎯 Final Notes:
- The policy name must exactly match the OIDC role name; otherwise, access will be denied.
- You can assign different policies to different roles to manage access control at a more granular level.