
Bitnami package for Cilium

Cilium is an eBPF-based networking, observability, and security solution for Linux container management platforms like Docker and Kubernetes.

Overview of Cilium

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

helm install my-release oci://registry-1.docker.io/bitnamicharts/cilium

Looking to use Cilium in production? Try VMware Tanzu Application Catalog, the enterprise edition of Bitnami Application Catalog.

Introduction

Bitnami charts for Helm are carefully engineered and actively maintained, and are the quickest and easiest way to deploy production-ready containers on a Kubernetes cluster.

This chart bootstraps a Cilium deployment in a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • Nodes with Linux kernel >= 4.19.57 or equivalent (e.g., 4.18 on RHEL8)

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/cilium

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The command deploys Cilium on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Configuration and installation details

Rolling VS Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
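For illustration, an immutable reference can be set by pinning the agent image digest in your values file. The digest below is a placeholder, not a real image digest:

```yaml
agent:
  image:
    # Placeholder digest: obtain the real one, e.g. with
    # "docker manifest inspect <registry>/<repository>/cilium:<tag>"
    digest: sha256:0000000000000000000000000000000000000000000000000000000000000000
```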

External Key-Value Store support

You may want to have Cilium connect to an external key-value store rather than installing one inside your cluster. Typical reasons for this are to use a managed service, or to share a common store for all your applications. To achieve this, the chart allows you to specify the connection details for an external key-value store with the externalKvstore parameters. You should also disable the etcd installation with the etcd.enabled option. Here is an example:

etcd.enabled=false
externalKvstore.enabled=true
externalKvstore.endpoints[0]=external-kvstore-host-0:2379
externalKvstore.endpoints[1]=external-kvstore-host-1:2379

Cilium CNI plugin

Please also note the chart installs the Cilium CNI plugin on the Kubernetes nodes by default. If you want to disable this behavior, set the agent.cniPlugin.install parameter to false.

It's also necessary to know the paths where the CNI binary and configuration files are located in your Kubernetes nodes. The chart assumes that the CNI binary is located in the /opt/cni/bin directory and the CNI configuration files are located in the /etc/cni/net.d directory. You can customize these paths using the agent.cniPlugin.hostCNIBinDir and agent.cniPlugin.hostCNINetDir parameters.
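As a sketch, assuming nodes that keep their CNI directories in non-default locations (the paths below are illustrative), the values would look like:

```yaml
agent:
  cniPlugin:
    install: true
    # Illustrative non-default host paths; match them to your nodes
    hostCNIBinDir: /var/lib/cni/bin
    hostCNINetDir: /var/lib/cni/net.d
```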

Securing traffic using TLS

This chart supports encrypting communications between Hubble components using TLS. To enable this feature, set the hubble.tls.enabled parameter to true.

It is necessary to create a secret containing the TLS certificates and pass it to the chart via the hubble.tls.existingCASecret, hubble.tls.peers.existingSecret, hubble.tls.relay.existingSecret and hubble.tls.client.existingSecret parameters. Every secret should contain tls.crt and tls.key keys holding the certificate and key files, respectively. For example, create the CA secret from the certificate files:

kubectl create secret generic ca-tls-secret --from-file=./tls.crt --from-file=./tls.key

You can create the required TLS certificates manually or rely on the chart's auto-generation capabilities. The chart supports two different ways to auto-generate the required certificates:

  • Using Helm capabilities. Enable this feature by setting hubble.tls.autoGenerated.enabled to true and hubble.tls.autoGenerated.engine to helm.
  • Relying on CertManager (please note it's required to have CertManager installed in your K8s cluster). Enable this feature by setting hubble.tls.autoGenerated.enabled to true and hubble.tls.autoGenerated.engine to cert-manager. Note that an existing Issuer/ClusterIssuer can be used for issuing the TLS certificates by setting the hubble.tls.autoGenerated.certManager.existingIssuer and hubble.tls.autoGenerated.certManager.existingIssuerKind parameters.

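As a sketch, the cert-manager engine described above could be enabled with values along these lines (the ClusterIssuer name my-ca-issuer is an assumption for illustration):

```yaml
hubble:
  tls:
    enabled: true
    autoGenerated:
      enabled: true
      engine: cert-manager
      certManager:
        # Hypothetical pre-existing issuer in the cluster
        existingIssuer: my-ca-issuer
        existingIssuerKind: ClusterIssuer
```
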
Backup and restore

To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.

Prometheus metrics

This chart can be integrated with Prometheus by setting *.metrics.enabled (under the agent, envoy, operator, hubble.peers and hubble.relay sections) to true. This will expose the Cilium, Hubble and Envoy native Prometheus ports in the containers. Additionally, it will deploy several metrics services, which can be configured under the *.metrics.service section (under the agent, envoy, operator, hubble.peers and hubble.relay sections). These metrics services will have the necessary annotations to be automatically scraped by Prometheus.
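As an example of the *.metrics.enabled pattern described above, metrics for the agent and the Hubble relay could be enabled with:

```yaml
# Sketch: expose the native Prometheus ports and metrics services
agent:
  metrics:
    enabled: true
hubble:
  relay:
    metrics:
      enabled: true
```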

Prometheus requirements

It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.

Integration with Prometheus Operator

The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value *.metrics.serviceMonitor.enabled=true (under the agent, envoy, operator, hubble.peers and hubble.relay sections). Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:

no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Install the Bitnami Kube Prometheus helm chart for having the necessary CRDs and the Prometheus Operator.
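Following the same pattern, ServiceMonitor creation for the agent could be enabled like this (scrape settings beyond enabling the object are left to your Prometheus setup):

```yaml
agent:
  metrics:
    enabled: true
    serviceMonitor:
      enabled: true
```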

Ingress

This chart provides support for Ingress resources. If you have an ingress controller installed on your cluster, such as nginx-ingress-controller or contour, you can utilize the ingress controller to serve Hubble UI. To enable Ingress integration, set hubble.ui.enabled and hubble.ui.ingress.enabled to true.

The most common scenario is to have one host name mapped to the deployment. In this case, the hubble.ui.ingress.hostname property can be used to set the host name. The hubble.ui.ingress.tls parameter can be used to add the TLS configuration for this host.

However, it is also possible to have more than one host. To facilitate this, the hubble.ui.ingress.extraHosts parameter (if available) can be set with the host names specified as an array. The hubble.ui.ingress.extraTLS parameter (if available) can also be used to add the TLS configuration for extra hosts.

NOTE: For each host specified in the hubble.ui.ingress.extraHosts parameter, it is necessary to set a name, path, and any annotations that the Ingress controller should know about. Not all annotations are supported by all Ingress controllers, but this annotation reference document lists the annotations supported by many popular Ingress controllers.
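As a sketch, a single extra host entry might look like the following (host names are illustrative):

```yaml
hubble:
  ui:
    enabled: true
    ingress:
      enabled: true
      hostname: hubble-ui.local
      extraHosts:
        # Illustrative additional host; add annotations as your
        # Ingress controller requires
        - name: hubble-ui.internal.example.com
          path: /
```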

Adding the TLS parameter (where available) will cause the chart to generate HTTPS URLs, and the application will be available on port 443. The actual TLS secrets do not have to be generated by this chart. However, if TLS is enabled, the Ingress record will not work until the TLS secret exists.

Learn more about Ingress controllers.

Additional environment variables

In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the extraEnvVars property. For instance:

agent:
  extraEnvVars:
    - name: LOG_LEVEL
      value: error

Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the extraEnvVarsCM or the extraEnvVarsSecret values.
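For example, referencing a pre-existing ConfigMap and Secret (the names below are assumptions):

```yaml
agent:
  # The ConfigMap and Secret must already exist and contain
  # key/value pairs to be exposed as environment variables
  extraEnvVarsCM: cilium-agent-extra-env
  extraEnvVarsSecret: cilium-agent-extra-env-secret
```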

Sidecars

If additional containers are needed in the same pod as the Cilium agent (such as additional metrics or logging exporters), they can be defined using the agent.sidecars parameter.

agent:
  sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
    - name: portname
      containerPort: 1234

If these sidecars export extra ports, extra port definitions can be added using the agent.service.extraPorts parameter (where available), as shown in the example below:

agent:
  service:
    extraPorts:
    - name: extraPort
      port: 11311
      targetPort: 11311

If additional init containers are needed in the same pod, they can be defined using the agent.initContainers parameter. Here is an example:

agent:
  initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234

Learn more about sidecar containers and init containers.

Pod affinity

This chart allows you to set your custom affinity using the affinity parameter. Find more information about Pod affinity in the Kubernetes documentation.

As an alternative, use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the bitnami/common chart. To do so, set the podAffinityPreset, podAntiAffinityPreset, or nodeAffinityPreset parameters.
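As a sketch, a soft pod anti-affinity preset could be applied to the operator (assuming the preset parameters are exposed under the operator section, as in other Bitnami charts):

```yaml
operator:
  # Preset values used across Bitnami charts: "", "soft" or "hard"
  podAntiAffinityPreset: soft
```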

Parameters

Global parameters
| Name | Description | Value |
| --- | --- | --- |
| global.imageRegistry | Global Docker image registry | "" |
| global.imagePullSecrets | Global Docker registry secret names as an array | [] |
| global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | "" |
| global.storageClass | DEPRECATED: use global.defaultStorageClass instead | "" |
| global.security.allowInsecureImages | Allows skipping image verification | false |
| global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto |
Common parameters
| Name | Description | Value |
| --- | --- | --- |
| kubeVersion | Override Kubernetes version | "" |
| nameOverride | String to partially override common.names.name | "" |
| fullnameOverride | String to fully override common.names.fullname | "" |
| namespaceOverride | String to fully override common.names.namespace | "" |
| commonLabels | Labels to add to all deployed objects | {} |
| commonAnnotations | Annotations to add to all deployed objects | {} |
| clusterDomain | Kubernetes cluster domain name | cluster.local |
| extraDeploy | Array of extra objects to deploy with the release | [] |
| diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false |
| diagnosticMode.command | Command to override all containers in the chart release | ["sleep"] |
| diagnosticMode.args | Args to override all containers in the chart release | ["infinity"] |
| configuration | Specify content for Cilium common configuration (basic one auto-generated based on other values otherwise) | {} |
| overrideConfiguration | Cilium common configuration override. Values defined here take precedence over the ones defined at configuration | {} |
| existingConfigmap | The name of an existing ConfigMap with your custom Cilium configuration | "" |
| clusterName | Name of the Cilium cluster | default |
| azure.enabled | Enable Azure integration | false |
| azure.resourceGroup | When enabling Azure integration, set the Azure Resource Group | "" |
| azure.tenantID | When enabling Azure integration, set the Azure Tenant ID | "" |
| azure.subscriptionID | When enabling Azure integration, set the Azure Subscription ID | "" |
| azure.clientID | When enabling Azure integration, set the Azure Client ID | "" |
| azure.clientSecret | When enabling Azure integration, set the Azure Client Secret | "" |
| aws.enabled | Enable AWS integration | false |
| aws.region | When enabling AWS integration, set the AWS region | "" |
| aws.accessKeyID | When enabling AWS integration, set the AWS Access Key ID | "" |
| aws.secretAccessKey | When enabling AWS integration, set the AWS Secret Access Key | "" |
| gcp.enabled | Enable GCP integration | false |
Cilium Agent Parameters
| Name | Description | Value |
| --- | --- | --- |
| agent.image.registry | Cilium Agent image registry | REGISTRY_NAME |
| agent.image.repository | Cilium Agent image repository | REPOSITORY_NAME/cilium |
| agent.image.digest | Cilium Agent image digest in the way sha256:aa.... Please note this parameter, if set, will override the image tag (immutable tags are recommended) | "" |
| agent.image.pullPolicy | Cilium Agent image pull policy | IfNotPresent |
| agent.image.pullSecrets | Cilium Agent image pull secrets | [] |
| agent.image.debug | Enable Cilium Agent image debug mode | false |
| agent.containerPorts.health | Cilium Agent health container port | 9879 |
| agent.containerPorts.pprof | Cilium Agent pprof container port | |

Note: the README for this chart is longer than the DockerHub length limit of 25000, so it has been trimmed. The full README can be found at https://github.com/bitnami/charts/blob/main/bitnami/cilium/README.md
