FAQ
Question: Every Pod, Job, Deployment, and any other type that runs a container must have the `resources` attribute set; otherwise the deployment will fail with an error similar to:
Answer: The Pod, Job, or Deployment is missing the following `resources` section with appropriate values:
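A minimal sketch of such a `resources` section (the CPU and memory values below are placeholders; choose values appropriate for your workload and your namespace quota):

```yaml
# Example values only -- adjust to your workload and quota.
resources:
  requests:
    cpu: "1"
    memory: 1Gi
  limits:
    cpu: "1"
    memory: 1Gi
```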
Question: Deployment returns a message similar to:
Answer: This error means the Pod you’re trying to create requests more memory (limits.memory=8Gi) than the available quota allows. Here, the namespace has a memory limit of 45,000 Mi (about 43.95 Gi) defined in the resource quota object (default-kcq58, see `kubectl get resourcequota -n [your-namespace] default-kcq58`), and your Pod creation would exceed it because the quota is nearly fully used (used: limits.memory=41872Mi).
Options for resolving:
- Reduce the memory request of the Pod: adjust `limits.memory` in the Pod spec to request less memory, ideally fitting within the remaining quota (about 3128 Mi, or around 3 Gi).
- Request an increase in quota (if you have control over the cluster settings): contact us at k8s@cerit-sc.cz to increase the memory quota for the namespace, or create an explicit project namespace with higher quotas than the personal namespace.
Question: No GPU found, `nvidia-smi` returns `command not found`.
Answer: The deployment is missing a request for a GPU, in either the limits or the requests section, for example:
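A minimal sketch of such a request, assuming the standard NVIDIA device-plugin resource name `nvidia.com/gpu` (the GPU count is an example value):

```yaml
resources:
  limits:
    nvidia.com/gpu: 1  # number of GPUs is an example value
```

Specifying the GPU under `limits` alone is sufficient: for extended resources such as `nvidia.com/gpu`, Kubernetes sets the request equal to the limit.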
Question: Deployment returns a message similar to:
Answer: The deployment is missing the `securityContext` section and the container image (in this case `mongo`) does not set a numeric `USER`. To fix this, just extend the deployment definition like this:
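A sketch of the extended definition; the UID/GID values below are examples and should match a user the image can actually run as:

```yaml
spec:
  template:
    spec:
      securityContext:
        runAsUser: 999    # example UID; pick one valid for the image
        runAsGroup: 999   # example GID
      containers:
        - name: mongo
          image: mongo
```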
The `runAsUser` and `runAsGroup` lines are important.
See full security context settings here.
Question: Helm deployment returns error code 413
Answer: HTTP code 413 means the request entity is too large. Helm stores the whole deployment (including all local files in the chart, whether or not they have a `.yaml` suffix) together with the values in a `Secret` object. The size limit of a Secret is about 1.5 MB. Verify that there is no big file anywhere in the chart.
Question: How can I fix the following type of error:
Answer: The error message is due to the Pod not meeting the requirements of the `restricted` PodSecurity standard in Kubernetes. To fix this, you need to add a `securityContext` to the Pod and container specification. Here’s how you can address each issue:
- allowPrivilegeEscalation: Set this to `false`.
- capabilities: Drop all capabilities to meet the restricted policy.
- runAsNonRoot: Ensure the Pod or container is set to run as a non-root user.
- seccompProfile: Set `seccompProfile.type` to `"RuntimeDefault"` or `"Localhost"`.
Example Pod spec:
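A sketch of a Pod spec that satisfies the `restricted` standard (the Pod name, container name, and `your-image` are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
    - name: app
      image: your-image
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop: ["ALL"]
```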
Explanation of the changes:
- `allowPrivilegeEscalation: false`: Prevents the container from gaining additional privileges.
- `capabilities.drop: ["ALL"]`: Drops all Linux capabilities, which is required under the restricted policy.
- `runAsNonRoot: true`: Ensures the container doesn’t run as the root user.
- `seccompProfile.type: "RuntimeDefault"`: Enforces the default seccomp profile for additional security.
Applying the updated spec:
Replace `your-image` with the appropriate container image name, and apply the updated configuration. This should resolve the error and allow the Pod to pass the `restricted` PodSecurity admission.
Question: I am getting an “OpenSSL version mismatch. Built against 30000020, you have 30400010” error.
Answer: This error typically occurs in a Conda environment when the system’s OpenSSL version differs from the one used by Conda. The issue arises because Conda libraries take precedence over system libraries, but certain system binaries (such as `ssh`, or `git`, which calls `ssh`) require the system’s OpenSSL version to function correctly.
As a temporary workaround, you can unset the `LD_LIBRARY_PATH` environment variable before running the problematic command. For example:
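A sketch of the workaround; `git pull` is just an example command, and the `env -u` form removes the variable for that single invocation only:

```shell
# Run one command with LD_LIBRARY_PATH removed from its environment, so
# ssh (or git, which invokes ssh) falls back to the system OpenSSL:
#   env -u LD_LIBRARY_PATH git pull
# Demonstration that the child process really sees the variable as unset:
LD_LIBRARY_PATH=/opt/conda/lib \
  env -u LD_LIBRARY_PATH sh -c 'echo "LD_LIBRARY_PATH=${LD_LIBRARY_PATH:-<unset>}"'
```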
This forces the system to use its default OpenSSL version instead of the one provided by Conda.