bitnamicharts/cilium
Bitnami Helm chart for Cilium
Cilium is an eBPF-based networking, observability, and security solution for Linux container management platforms like Docker and Kubernetes.
Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
helm install my-release oci://registry-1.docker.io/bitnamicharts/cilium
Looking to use Cilium in production? Try VMware Tanzu Application Catalog, the enterprise edition of Bitnami Application Catalog.
Bitnami charts for Helm are carefully engineered and actively maintained, and are the quickest and easiest way to deploy containers on a Kubernetes cluster that are ready to handle production workloads.
This chart bootstraps a Cilium deployment in a Kubernetes cluster using the Helm package manager.
Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.
To install the chart with the release name my-release:
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/cilium
Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.
The command deploys Cilium on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.
Tip: List all releases using helm list
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container is available, or if significant changes or critical vulnerabilities need to be addressed.
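In line with the immutable tags recommendation, you can go one step further and pin the Cilium Agent image by digest using the agent.image.digest parameter described in the parameters tables below (a minimal sketch; the digest value is a placeholder you must replace with a real one):

agent:
  image:
    digest: "sha256:<replace-with-a-real-image-digest>"   # placeholder digest; if set, it overrides the image tag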
You may want to have Cilium connect to an external key-value store rather than installing one inside your cluster. Typical reasons for this are to use a managed service, or to share a common store for all your applications. To achieve this, the chart allows you to specify credentials for an external key-value store with the externalKvstore parameter. You should also disable the etcd installation with the etcd.enabled option. Here is an example:
etcd.enabled=false
externalKvstore.enabled=true
externalKvstore.endpoints[0]=external-kvstore-host-0:2379
externalKvstore.endpoints[1]=external-kvstore-host-1:2379
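These values can be passed at deployment time as --set flags, for example (a sketch; the endpoint host names are placeholders for your own key-value store):

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/cilium \
  --set etcd.enabled=false \
  --set externalKvstore.enabled=true \
  --set "externalKvstore.endpoints[0]=external-kvstore-host-0:2379" \
  --set "externalKvstore.endpoints[1]=external-kvstore-host-1:2379"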
Please also note the chart installs the Cilium CNI plugin on the Kubernetes nodes by default. If you want to disable this behavior, set the agent.cniPlugin.install parameter to false.
It's also necessary to know the paths where the CNI binary and configuration files are located in your Kubernetes nodes. The chart assumes that the CNI binary is located in the /opt/cni/bin directory and the CNI configuration files are located in the /etc/cni/net.d directory. You can customize these paths using the agent.cniPlugin.hostCNIBinDir and agent.cniPlugin.hostCNINetDir parameters.
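Putting these together, a values snippet customizing the CNI installation could look like this (a minimal sketch built only from the parameters above; the paths shown are the defaults, replace them with your nodes' actual layout):

agent:
  cniPlugin:
    install: true                    # install the Cilium CNI plugin on the nodes
    hostCNIBinDir: /opt/cni/bin      # where the CNI binaries live on the node
    hostCNINetDir: /etc/cni/net.d    # where the CNI config files live on the node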
This chart supports encrypting communications between Hubble components using TLS. To enable this feature, set the hubble.tls.enabled parameter to true.
It is necessary to create a secret containing the TLS certificates and pass it to the chart via the hubble.tls.existingCASecret, hubble.tls.peers.existingSecret, hubble.tls.relay.existingSecret and hubble.tls.client.existingSecret parameters. Every secret should contain tls.crt and tls.key keys with the certificate and key files respectively. For example, create the CA secret with the certificate files:
kubectl create secret generic ca-tls-secret --from-file=./tls.crt --from-file=./tls.key
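After creating a similar secret for each component, you can reference them from your values (a minimal sketch; every secret name except ca-tls-secret is hypothetical and must match the secrets you created):

hubble:
  tls:
    enabled: true
    existingCASecret: ca-tls-secret        # created with the kubectl command above
    peers:
      existingSecret: peers-tls-secret     # hypothetical secret name
    relay:
      existingSecret: relay-tls-secret     # hypothetical secret name
    client:
      existingSecret: client-tls-secret    # hypothetical secret name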
You can manually create the required TLS certificates or rely on the chart's auto-generation capabilities. The chart supports two different ways to auto-generate the required certificates:

- Using Helm: set hubble.tls.autoGenerated.enabled to true and hubble.tls.autoGenerated.engine to helm.
- Using cert-manager: set hubble.tls.autoGenerated.enabled to true and hubble.tls.autoGenerated.engine to cert-manager. Please note it's supported to use an existing Issuer/ClusterIssuer for issuing the TLS certificates by setting the hubble.tls.autoGenerated.certManager.existingIssuer and hubble.tls.autoGenerated.certManager.existingIssuerKind parameters.
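For example, auto-generating the certificates with an existing cert-manager ClusterIssuer could look like this (a minimal sketch; the issuer name my-clusterissuer is hypothetical):

hubble:
  tls:
    enabled: true
    autoGenerated:
      enabled: true
      engine: cert-manager
      certManager:
        existingIssuer: my-clusterissuer    # hypothetical pre-existing issuer
        existingIssuerKind: ClusterIssuer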
To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.
This chart can be integrated with Prometheus by setting *.metrics.enabled (under the agent, envoy, operator, hubble.peers and hubble.relay sections) to true. This will expose the Cilium, Hubble and Envoy native Prometheus ports in the containers. Additionally, it will deploy several metrics services, which can be configured under the *.metrics.service section (under the same agent, envoy, operator, hubble.peers and hubble.relay sections). These metrics services will have the necessary annotations to be automatically scraped by Prometheus.
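For instance, enabling metrics for the agent and the operator looks like this (a minimal sketch; the same pattern applies to the envoy, hubble.peers and hubble.relay sections):

agent:
  metrics:
    enabled: true
operator:
  metrics:
    enabled: true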
Prometheus requirements
It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.
Integration with Prometheus Operator
The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value *.metrics.serviceMonitor.enabled=true (under the agent, envoy, operator, hubble.peers and hubble.relay sections). Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:
no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
Install the Bitnami Kube Prometheus helm chart to get the necessary CRDs and the Prometheus Operator.
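As with the metrics services, ServiceMonitor objects are enabled per section (a minimal sketch for the agent; the same applies to envoy, operator, hubble.peers and hubble.relay):

agent:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true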
This chart provides support for Ingress resources. If you have an ingress controller installed on your cluster, such as nginx-ingress-controller or contour, you can utilize the ingress controller to serve Hubble UI. To enable Ingress integration, set hubble.ui.enabled and hubble.ui.ingress.enabled to true.
The most common scenario is to have one host name mapped to the deployment. In this case, the hubble.ui.ingress.hostname property can be used to set the host name. The hubble.ui.ingress.tls parameter can be used to add the TLS configuration for this host.
However, it is also possible to have more than one host. To facilitate this, the hubble.ui.ingress.extraHosts parameter (if available) can be set with the host names specified as an array. The hubble.ui.ingress.extraTLS parameter (if available) can also be used to add the TLS configuration for extra hosts.
NOTE: For each host specified in the hubble.ui.ingress.extraHosts parameter, it is necessary to set a name, path, and any annotations that the Ingress controller should know about. Not all annotations are supported by all Ingress controllers, but this annotation reference document lists the annotations supported by many popular Ingress controllers.
Adding the TLS parameter (where available) will cause the chart to generate HTTPS URLs, and the application will be available on port 443. The actual TLS secrets do not have to be generated by this chart. However, if TLS is enabled, the Ingress record will not work until the TLS secret exists.
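Putting it together, a minimal values snippet exposing Hubble UI through an Ingress could look like this (the host name hubble.example.com is a placeholder):

hubble:
  ui:
    enabled: true
    ingress:
      enabled: true
      hostname: hubble.example.com    # placeholder host name
      tls: true                       # serve this host over HTTPS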
Learn more about Ingress controllers.
In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the extraEnvVars property. For instance:
agent:
  extraEnvVars:
    - name: LOG_LEVEL
      value: error
Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the extraEnvVarsCM or the extraEnvVarsSecret values.
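For example, to load the variables from an existing ConfigMap (a sketch; the ConfigMap name cilium-agent-extra-env is hypothetical and must exist in the release namespace):

agent:
  extraEnvVarsCM: cilium-agent-extra-env    # hypothetical ConfigMap holding env vars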
If additional containers are needed in the same pod as Cilium (such as additional metrics or logging exporters), they can be defined using the agent.sidecars parameter.
agent:
  sidecars:
    - name: your-image-name
      image: your-image
      imagePullPolicy: Always
      ports:
        - name: portname
          containerPort: 1234
If these sidecars export extra ports, extra port definitions can be added using the agent.service.extraPorts parameter (where available), as shown in the example below:
agent:
  service:
    extraPorts:
      - name: extraPort
        port: 11311
        targetPort: 11311
If additional init containers are needed in the same pod, they can be defined using the agent.initContainers parameter. Here is an example:
agent:
  initContainers:
    - name: your-image-name
      image: your-image
      imagePullPolicy: Always
      ports:
        - name: portname
          containerPort: 1234
Learn more about sidecar containers and init containers.
This chart allows you to set your custom affinity using the affinity parameter. Find more information about Pod affinity in the Kubernetes documentation.
As an alternative, use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the bitnami/common chart. To do so, set the podAffinityPreset, podAntiAffinityPreset, or nodeAffinityPreset parameters.
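For example, spreading agent pods with a soft anti-affinity preset could look like this (a sketch assuming the preset parameters are exposed per component, e.g. under the agent section, as is usual in Bitnami charts):

agent:
  podAntiAffinityPreset: soft    # preset values are typically soft or hard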
Name | Description | Value |
---|---|---|
global.imageRegistry | Global Docker image registry | "" |
global.imagePullSecrets | Global Docker registry secret names as an array | [] |
global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | "" |
global.storageClass | DEPRECATED: use global.defaultStorageClass instead | "" |
global.security.allowInsecureImages | Allows skipping image verification | false |
global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto |
Name | Description | Value |
---|---|---|
kubeVersion | Override Kubernetes version | "" |
nameOverride | String to partially override common.names.name | "" |
fullnameOverride | String to fully override common.names.fullname | "" |
namespaceOverride | String to fully override common.names.namespace | "" |
commonLabels | Labels to add to all deployed objects | {} |
commonAnnotations | Annotations to add to all deployed objects | {} |
clusterDomain | Kubernetes cluster domain name | cluster.local |
extraDeploy | Array of extra objects to deploy with the release | [] |
diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false |
diagnosticMode.command | Command to override all containers in the chart release | ["sleep"] |
diagnosticMode.args | Args to override all containers in the chart release | ["infinity"] |
configuration | Specify content for Cilium common configuration (basic one auto-generated based on other values otherwise) | {} |
overrideConfiguration | Cilium common configuration override. Values defined here takes precedence over the ones defined at configuration | {} |
existingConfigmap | The name of an existing ConfigMap with your custom Cilium configuration | "" |
clusterName | Name of the Cilium cluster | default |
azure.enabled | Enable Azure integration | false |
azure.resourceGroup | When enabling Azure integration, set the Azure Resource Group | "" |
azure.tenantID | When enabling Azure integration, set the Azure Tenant ID | "" |
azure.subscriptionID | When enabling Azure integration, set the Azure Subscription ID | "" |
azure.clientID | When enabling Azure integration, set the Azure Client ID | "" |
azure.clientSecret | When enabling Azure integration, set the Azure Client Secret | "" |
aws.enabled | Enable AWS integration | false |
aws.region | When enabling AWS integration, set the AWS region | "" |
aws.accessKeyID | When enabling AWS integration, set the AWS Access Key ID | "" |
aws.secretAccessKey | When enabling AWS integration, set the AWS Secret Access Key | "" |
gcp.enabled | Enable GCP integration | false |
Name | Description | Value |
---|---|---|
agent.image.registry | Cilium Agent image registry | REGISTRY_NAME |
agent.image.repository | Cilium Agent image repository | REPOSITORY_NAME/cilium |
agent.image.digest | Cilium Agent image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag image tag (immutable tags are recommended) | "" |
agent.image.pullPolicy | Cilium Agent image pull policy | IfNotPresent |
agent.image.pullSecrets | Cilium Agent image pull secrets | [] |
agent.image.debug | Enable Cilium Agent image debug mode | false |
agent.containerPorts.health | Cilium Agent health container port | 9879 |
agent.containerPorts.pprof | Cilium Agent pprof container port |
Note: the README for this chart is longer than the DockerHub length limit of 25000, so it has been trimmed. The full README can be found at https://github.com/bitnami/charts/blob/main/bitnami/cilium/README.md