bitnamicharts/cert-manager
Bitnami Helm chart for cert-manager
cert-manager is a Kubernetes add-on to automate the management and issuance of TLS certificates from various issuing sources.
Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
```console
helm install my-release oci://registry-1.docker.io/bitnamicharts/cert-manager
```
Looking to use cert-manager in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.
Bitnami charts for Helm are carefully engineered, actively maintained and are the quickest and easiest way to deploy containers on a Kubernetes cluster that are ready to handle production workloads.
This chart bootstraps a cert-manager Deployment in a Kubernetes cluster using the Helm package manager.
Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.
To install the chart with the release name `my-release`:

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/cert-manager
```

Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.

Tip: List all releases using `helm list`.
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the `resources` value (check the parameters table). Setting requests is essential for production workloads and these should be adapted to your specific use case.

To make this process easier, the chart contains the `resourcesPreset` values, which automatically set the `resources` section according to different presets. Check these presets in the bitnami/common chart. However, using `resourcesPreset` in production workloads is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
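For example, a minimal sketch setting explicit requests and limits for the controller. The figures below are illustrative placeholders, not tuned recommendations, and the per-component `controller.resources` block is assumed (the webhook and cainjector components would take the same structure):

```yaml
controller:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 500m
      memory: 512Mi
```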
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
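For instance, a hedged sketch pinning the controller image by digest via the documented `controller.image.digest` parameter (the digest value below is a placeholder to replace with a real one):

```yaml
controller:
  image:
    # Pinning by digest overrides the tag, so the deployed image never changes
    digest: sha256:aa00000000000000000000000000000000000000000000000000000000000000
```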
Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities exist.
To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.
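As a hedged sketch, assuming the Velero CLI is installed and the chart is deployed in the `cert-manager` namespace:

```console
# Create a backup of every resource and volume in the release namespace
velero backup create cert-manager-backup --include-namespaces cert-manager

# Restore it later into a new cluster or namespace
velero restore create --from-backup cert-manager-backup
```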
In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the `extraEnvVars` property.

```yaml
extraEnvVars:
  - name: LOG_LEVEL
    value: DEBUG
```
Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the `extraEnvVarsCM` or the `extraEnvVarsSecret` values.
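For example (the ConfigMap and Secret names below are hypothetical and must already exist in the release namespace):

```yaml
extraEnvVarsCM: cert-manager-extra-env
extraEnvVarsSecret: cert-manager-extra-env-secret
```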
This chart can be integrated with Prometheus by setting `metrics.enabled` to `true`. This will expose the cert-manager native Prometheus endpoint in the service, with the necessary annotations to be automatically scraped by Prometheus.
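For example, metrics can be enabled at install time (using the same registry placeholders as above):

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/cert-manager --set metrics.enabled=true
```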
Prometheus requirements
It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.
Integration with Prometheus Operator
The chart can deploy `ServiceMonitor` objects for integration with Prometheus Operator installations. To do so, set the value `metrics.serviceMonitor.enabled=true`. Ensure that the Prometheus Operator `CustomResourceDefinitions` are installed in the cluster, or the installation will fail with the following error:

```text
no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
```

Install the Bitnami Kube Prometheus helm chart to obtain the necessary CRDs and the Prometheus Operator.
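For example, once the Prometheus Operator CRDs are available:

```console
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/cert-manager \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true
```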
If you have a need for additional containers to run within the same pod as the cert-manager app (e.g. an additional metrics or logging exporter), you can do so via the `sidecars` config parameter. Simply define your container according to the Kubernetes container spec.

```yaml
sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
Similarly, you can add extra init containers using the `initContainers` parameter.

```yaml
initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
```
Cert Manager supports issuing certificates through different Issuers. For instance, you can use a Self Signed Issuer to issue the certificates.
The Self Signed issuer doesn't represent a certificate authority as such, but instead denotes that certificates will "sign themselves" using a given private key.
NOTE: Find the list of available Issuers in the Cert Manager official documentation.
To configure Cert Manager, create an Issuer object. The structure of this object differs depending on the Issuer type. The Self Signed issuer is straightforward to configure.

To create a self-signed issuer and generate a self-signed certificate, declare an Issuer, a ClusterIssuer and a Certificate, as shown below:
```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  selfSigned: {}
---
apiVersion: cert-manager.io/v1
kind: Issuer
metadata:
  name: letsencrypt-ca
  namespace: sandbox
spec:
  ca:
    secretName: letsencrypt-ca
---
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: letsencrypt-ca
  namespace: sandbox
spec:
  isCA: true
  commonName: osm-system
  secretName: letsencrypt-ca
  issuerRef:
    name: letsencrypt-prod
    kind: ClusterIssuer
    group: cert-manager.io
```
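Save the manifests to a file and apply them; you can then check that the issuers are ready. The file name below is a placeholder, and the `sandbox` namespace is assumed to exist:

```console
kubectl apply -f self-signed-issuer.yaml
kubectl get clusterissuer letsencrypt-prod
kubectl get issuer --namespace=sandbox
```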
Next, use the ClusterIssuer to generate certificates for the applications in your Kubernetes cluster. Learn how to secure your Ingress resources.
After the Ingress resource is ready, Cert Manager will create a secret. This secret contains the generated TLS certificate. This can be checked as shown below:
```console
$ kubectl get secret --namespace=sandbox
NAME             TYPE                DATA   AGE
letsencrypt-ca   kubernetes.io/tls   3      Xs
```
Cert Manager supports issuing certificates through different Issuers. For instance, you can use a public ACME (Automated Certificate Management Environment) server to issue the certificates.
NOTE: Find the list of available Issuers in the Cert Manager official documentation.
To configure Cert Manager, create an Issuer object. The structure of this object differs depending on the Issuer type. For ACME, it is necessary to include the information for a single account registered in the ACME Certificate Authority server.
Once Cert Manager is configured to use ACME, it will verify that you are the owner of the domains for which certificates are being requested. Cert Manager uses two different challenges to verify that you are the owner of your domain: HTTP01 or DNS01. Learn more about ACME challenges.
NOTE: Learn more about the process to solve challenges in the official documentation.
To create an ACME issuer for use with Let's Encrypt, declare a ClusterIssuer as shown below:

```yaml
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    # You must replace this email address with your own.
    # Let's Encrypt will use this to contact you about expiring
    # certificates, and issues related to your account.
    # Replace the EMAIL-ADDRESS placeholder with the correct email account
    email: EMAIL-ADDRESS
    server: https://acme-v02.api.letsencrypt.org/directory
    privateKeySecretRef:
      name: letsencrypt-prod
    # Add a single challenge solver, HTTP01 using nginx
    solvers:
      - http01:
          ingress:
            class: nginx
```
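Save the manifest to a file (the file name below is a placeholder) and apply it; you can then inspect the ACME account registration status:

```console
kubectl apply -f letsencrypt-prod-clusterissuer.yaml
kubectl describe clusterissuer letsencrypt-prod
```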
Next, use the ClusterIssuer to generate certificates for the applications in your Kubernetes cluster. Learn how to secure your Ingress resources.
After the Ingress resource is ready, Cert Manager verifies the domain using HTTP01/DNS01 challenges. While this verification is in progress, the certificate status can be checked as shown below:

```console
$ kubectl get certificates
NAME             READY   SECRET           AGE
letsencrypt-ca   False   letsencrypt-ca   X
```
The status remains False whilst verification is in progress. This status will change to True when the HTTP01 verification is completed successfully.
```console
$ kubectl get certificates
NAME             READY   SECRET           AGE
letsencrypt-ca   True    letsencrypt-ca   X

$ kubectl get secrets
NAME             TYPE                DATA   AGE
letsencrypt-ca   kubernetes.io/tls   3      Xm
```
Once you configure an Issuer for Cert Manager (either a Self-Signed Issuer or an ACME Issuer), Cert Manager will make use of this Issuer to create a TLS secret containing the certificates. Cert Manager can only create this secret if the application is already exposed. One way to do this is with an Ingress Resource which exposes the application and includes the corresponding annotations for Cert Manager.
There are two options to expose your application through an Ingress Controller using Cert Manager to manage the TLS certificates:
Deploy another Helm chart which supports exposing the application through an Ingress controller. For instance, use the Bitnami Helm Chart for WordPress and configure Ingress for WordPress. To enable the integration with Cert Manager, add the annotations below to the `ingress.annotations` parameter:

```yaml
# Set up your ingress.class below (in this example, we are using the nginx ingress controller)
kubernetes.io/ingress.class: nginx
cert-manager.io/cluster-issuer: letsencrypt-prod
```
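As a sketch of this first option, assuming the standard `ingress.*` parameters exposed by the Bitnami WordPress chart (DOMAIN is a placeholder), the values could look like this:

```yaml
ingress:
  enabled: true
  hostname: DOMAIN
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
  tls: true
```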
Create your own Ingress resource as shown in the example below:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-test
  annotations:
    # Set up your ingress.class below (in this example, we are using the nginx ingress controller)
    kubernetes.io/ingress.class: "nginx"
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  tls:
    # Replace the DOMAIN placeholder with the correct domain name
    - hosts:
        - DOMAIN
      secretName: letsencrypt-ca
  rules:
    # Replace the DOMAIN placeholder with the correct domain name
    - host: DOMAIN
      http:
        paths:
          - path: /
            pathType: Exact
            backend:
              service:
                name: ingress-test
                port:
                  number: 80
```
There are cases where you may want to deploy extra objects, such as a ConfigMap containing your app's configuration or some extra deployment with a microservice used by your app. To cover this case, the chart allows adding the full specification of other objects using the `extraDeploy` parameter.
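For example, the following sketch (with a hypothetical ConfigMap name and contents) deploys an additional ConfigMap alongside the release:

```yaml
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-app-config
    data:
      app.properties: |
        log.level=info
```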
This chart allows you to set your custom affinity using the `controller.affinity`, `cainjector.affinity` or `webhook.affinity` parameters. Find more information about Pod affinity in the Kubernetes documentation.
As an alternative, you can make use of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the `controller.podAffinityPreset`, `cainjector.podAffinityPreset`, `webhook.podAffinityPreset`, `controller.podAntiAffinityPreset`, `cainjector.podAntiAffinityPreset`, `webhook.podAntiAffinityPreset`, `controller.nodeAffinityPreset`, `cainjector.nodeAffinityPreset` or `webhook.nodeAffinityPreset` parameters.
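For instance, a minimal sketch that softly spreads controller pods across nodes and pins them to amd64 nodes, assuming the `soft`/`hard` preset values and the `type`/`key`/`values` node-affinity structure from the bitnami/common chart:

```yaml
controller:
  # Prefer scheduling controller replicas on different nodes
  podAntiAffinityPreset: soft
  # Require nodes with a matching label
  nodeAffinityPreset:
    type: hard
    key: kubernetes.io/arch
    values:
      - amd64
```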
Global parameters

| Name | Description | Value |
|------|-------------|-------|
| `global.imageRegistry` | Global Docker image registry | `""` |
| `global.imagePullSecrets` | Global Docker registry secret names as an array | `[]` |
| `global.defaultStorageClass` | Global default StorageClass for Persistent Volume(s) | `""` |
| `global.storageClass` | DEPRECATED: use `global.defaultStorageClass` instead | `""` |
| `global.security.allowInsecureImages` | Allows skipping image verification | `false` |
| `global.compatibility.openshift.adaptSecurityContext` | Adapt the securityContext sections of the deployment to make them compatible with the OpenShift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup, and let the platform use their allowed default IDs. Possible values: `auto` (apply if the detected running cluster is OpenShift), `force` (always perform the adaptation), `disabled` (do not perform adaptation) | `auto` |
Common parameters

| Name | Description | Value |
|------|-------------|-------|
| `kubeVersion` | Override Kubernetes version | `""` |
| `nameOverride` | String to partially override common.names.fullname | `""` |
| `fullnameOverride` | String to fully override common.names.fullname | `""` |
| `commonLabels` | Labels to add to all deployed objects | `{}` |
| `commonAnnotations` | Annotations to add to all deployed objects | `{}` |
| `extraDeploy` | Array of extra objects to deploy with the release | `[]` |
| `logLevel` | Set up cert-manager log level | `2` |
| `clusterResourceNamespace` | Namespace used to store DNS provider credentials etc. for ClusterIssuer resources. If empty, uses the namespace where the controller is deployed. | `""` |
| `leaderElection.namespace` | Namespace in which leader election works. | `kube-system` |
| `installCRDs` | Flag to install cert-manager CRDs | `false` |
| `replicaCount` | Number of cert-manager replicas | `1` |
Controller parameters

| Name | Description | Value |
|------|-------------|-------|
| `controller.replicaCount` | Number of Controller replicas | `1` |
| `controller.image.registry` | Controller image registry | `REGISTRY_NAME` |
| `controller.image.repository` | Controller image repository | `REPOSITORY_NAME/cert-manager` |
| `controller.image.digest` | Controller image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `controller.image.pullPolicy` | Controller image pull policy | `IfNotPresent` |
| `controller.image.pullSecrets` | Controller image pull secrets | `[]` |
| `controller.image.debug` | Controller image debug mode | `false` |
| `controller.acmesolver.image.registry` | ACME solver image registry | `REGISTRY_NAME` |
| `controller.acmesolver.image.repository` | ACME solver image repository | `REPOSITORY_NAME/acmesolver` |
| `controller.acmesolver.image.digest` | ACME solver image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | `""` |
| `controller.acmesolver.image.pullPolicy` | ACME solver image pull policy | |
Note: the README for this chart is longer than the DockerHub length limit of 25000, so it has been trimmed. The full README can be found at https://github.com/bitnami/charts/blob/main/bitnami/cert-manager/README.md