bitnamicharts/clickhouse
Bitnami Helm chart for ClickHouse
ClickHouse is an open-source column-oriented OLAP database management system. Use it to boost your database performance while providing linear scalability and hardware efficiency.
Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
helm install my-release oci://registry-1.docker.io/bitnamicharts/clickhouse
Looking to use ClickHouse in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.
Bitnami charts for Helm are carefully engineered and actively maintained, and are the quickest and easiest way to deploy production-ready containers on a Kubernetes cluster.
This chart bootstraps a ClickHouse Deployment in a Kubernetes cluster using the Helm package manager.
Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.
If you are using Kubernetes 1.18, the following code needs to be commented out:
seccompProfile:
  type: "RuntimeDefault"
To install the chart with the release name my-release:
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/clickhouse
Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.
The command deploys ClickHouse on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.
Tip: List all releases using helm list
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the resources value (check parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.
To make this process easier, the chart contains the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, in production workloads using resourcesPreset is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
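As an illustrative values.yaml sketch (the request and limit figures below are placeholders to adapt to your workload, not recommendations; explicitly setting resources takes precedence over resourcesPreset):
resources:
  requests:
    cpu: 500m
    memory: 1Gi
  limits:
    cpu: "2"
    memory: 4Gi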
This chart can be integrated with Prometheus by setting metrics.enabled to true. This will expose the ClickHouse native Prometheus endpoint in the service, with the necessary annotations for Prometheus to scrape it automatically.
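A minimal values.yaml sketch to enable the endpoint:
metrics:
  enabled: true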
Prometheus requirements
It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.
Integration with Prometheus Operator
The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or the chart will fail with the following error:
no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
Install the Bitnami Kube Prometheus helm chart to get the necessary CRDs and the Prometheus Operator.
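Both parameters can be combined in values.yaml; a minimal sketch:
metrics:
  enabled: true
  serviceMonitor:
    enabled: true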
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container is available, there are significant changes, or critical vulnerabilities exist.
Bitnami charts configure credentials at first boot. Any further change in the secrets or credentials requires manual intervention. Follow these instructions:
kubectl create secret generic SECRET_NAME --from-literal=admin-password=PASSWORD --dry-run=client -o yaml | kubectl apply -f -
You can set keeper.enabled to true to use ClickHouse Keeper. If keeper.enabled=true, the Zookeeper settings will be ignored.
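A minimal values.yaml sketch for this setup (both parameters come from this chart):
keeper:
  enabled: true
zookeeper:
  enabled: false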
You may want to have ClickHouse connect to an external ZooKeeper ensemble rather than installing one inside your cluster. Typical reasons for this are to use a managed ZooKeeper service, or to share a common ZooKeeper ensemble with all your applications. To achieve this, the chart allows you to specify the connection details for an external ZooKeeper ensemble with the externalZookeeper parameters. You should also disable the ZooKeeper installation with the zookeeper.enabled option. Here is an example:
zookeeper.enabled=false
externalZookeeper.servers[0]=myexternalhost
externalZookeeper.port=2181
For using ingress (example without TLS):
ingress:
  ## If true, ClickHouse server Ingress will be created
  ##
  enabled: true
  ## ClickHouse server Ingress annotations
  ##
  annotations: {}
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: 'true'
  ## ClickHouse server Ingress hostnames
  ## Must be provided if Ingress is enabled
  ##
  hosts:
    - clickhouse.domain.com
If your cluster allows automatic creation/retrieval of TLS certificates, please refer to the documentation for that mechanism.
To manually configure TLS, first create/retrieve a key & certificate pair for the address(es) you wish to protect. Then create a TLS secret (named clickhouse-server-tls in this example) in the namespace. Include the secret's name, along with the desired hostnames, in the Ingress TLS section of your custom values.yaml file:
ingress:
  ## If true, ClickHouse server Ingress will be created
  ##
  enabled: true
  ## ClickHouse server Ingress annotations
  ##
  annotations: {}
  #   kubernetes.io/ingress.class: nginx
  #   kubernetes.io/tls-acme: 'true'
  ## ClickHouse server Ingress hostnames
  ## Must be provided if Ingress is enabled
  ##
  hosts:
    - clickhouse.domain.com
  ## ClickHouse server Ingress TLS configuration
  ## Secrets must be manually created in the namespace
  ##
  tls:
    - secretName: clickhouse-server-tls
      hosts:
        - clickhouse.domain.com
This chart facilitates the creation of TLS secrets for use with the Ingress controller (although this is not mandatory). There are several common use cases:
- Helm generates/manages certificate secrets based on the parameters.
- User generates/manages certificates separately.
- Helm creates self-signed certificates and generates a certificate secret.
- An additional tool (like cert-manager) manages the secrets for the application.
In the first two cases, a certificate and a key are needed. Files are expected in .pem format.
Here is an example of a certificate file:
NOTE: There may be more than one certificate if there is a certificate chain.
-----BEGIN CERTIFICATE-----
MIID6TCCAtGgAwIBAgIJAIaCwivkeB5EMA0GCSqGSIb3DQEBCwUAMFYxCzAJBgNV
...
jScrvkiBO65F46KioCL9h5tDvomdU1aqpI/CBzhvZn1c0ZTf87tGQR8NK7v7
-----END CERTIFICATE-----
Here is an example of a certificate key:
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAvLYcyu8f3skuRyUgeeNpeDvYBCDcgq+LsWap6zbX5f8oLqp4
...
wrj2wDbCDCFmfqnSJ+dKI3vFLlEz44sAV8jX/kd4Y6ZTQhlLbYc=
-----END RSA PRIVATE KEY-----
- If using Helm to manage the certificates based on the parameters, copy these values into the certificate and key values for a given *.ingress.secrets entry.
- If managing TLS secrets separately, it is necessary to create a TLS secret with name INGRESS_HOSTNAME-tls (where INGRESS_HOSTNAME is a placeholder to be replaced with the hostname you set using the *.ingress.hostname parameter).
- If your cluster has a cert-manager add-on to automate the management and issuance of TLS certificates, add to *.ingress.annotations the corresponding ones for cert-manager.
- If using self-signed certificates created by Helm, set both *.ingress.tls and *.ingress.selfSigned to true.
In case you want to add extra environment variables (useful for advanced operations like custom init scripts), you can use the extraEnvVars property.
clickhouse:
  extraEnvVars:
    - name: LOG_LEVEL
      value: error
Alternatively, you can use a ConfigMap or a Secret with the environment variables. To do so, use the extraEnvVarsCM or the extraEnvVarsSecret values.
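An illustrative sketch; the ConfigMap and Secret names below are hypothetical and must already exist in the namespace:
extraEnvVarsCM: clickhouse-extra-env
extraEnvVarsSecret: clickhouse-extra-env-secret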
If additional containers are needed in the same pod as ClickHouse (such as additional metrics or logging exporters), they can be defined using the sidecars parameter.
sidecars:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
If these sidecars export extra ports, extra port definitions can be added using the service.extraPorts parameter (where available), as shown in the example below:
service:
  extraPorts:
    - name: extraPort
      port: 11311
      targetPort: 11311
NOTE: This Helm chart already includes sidecar containers for the Prometheus exporters (where applicable). These can be activated by adding the --enable-metrics=true parameter at deployment time. The sidecars parameter should therefore only be used for any extra sidecar containers.
If additional init containers are needed in the same pod, they can be defined using the initContainers parameter. Here is an example:
initContainers:
  - name: your-image-name
    image: your-image
    imagePullPolicy: Always
    ports:
      - name: portname
        containerPort: 1234
Learn more about sidecar containers and init containers.
For advanced operations, the Bitnami ClickHouse chart allows using custom init and start scripts that will be mounted in /docker-entrypoint.initdb.d and /docker-entrypoint.startdb.d. The init scripts will be run on the first boot whereas the start scripts will be run on every container start. For adding the scripts directly as values use the initdbScripts and startdbScripts values. For using Secrets use the initdbScriptsSecret and startdbScriptsSecret values.
initdbScriptsSecret: init-scripts-secret
startdbScriptsSecret: start-scripts-secret
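Alternatively, a hedged values.yaml sketch passing the scripts inline (the script names and contents below are illustrative):
initdbScripts:
  my_init_script.sh: |
    #!/bin/bash
    echo "Executed once, on first boot"
startdbScripts:
  my_start_script.sh: |
    #!/bin/bash
    echo "Executed on every container start"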
This chart allows you to set your custom affinity using the affinity parameter. Find more information about Pod affinity in the Kubernetes documentation.
As an alternative, use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the bitnami/common chart. To do so, set the podAffinityPreset, podAntiAffinityPreset, or nodeAffinityPreset parameters.
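For example, a one-line values.yaml sketch using the soft pod anti-affinity preset from bitnami/common, which spreads ClickHouse pods across nodes on a best-effort basis:
podAntiAffinityPreset: soft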
To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.
The Bitnami ClickHouse image stores the ClickHouse data and configurations at the /bitnami path of the container. Persistent Volume Claims are used to keep the data across deployments. This is known to work in GCE, AWS, and minikube.
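A hedged values.yaml sketch of the related persistence settings (the size below is illustrative; leaving storageClass empty uses the cluster's default StorageClass):
persistence:
  enabled: true
  storageClass: ""
  size: 8Gi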
Name | Description | Value |
---|---|---|
global.imageRegistry | Global Docker image registry | "" |
global.imagePullSecrets | Global Docker registry secret names as an array | [] |
global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | "" |
global.storageClass | DEPRECATED: use global.defaultStorageClass instead | "" |
global.security.allowInsecureImages | Allows skipping image verification | false |
global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto |
Name | Description | Value |
---|---|---|
kubeVersion | Override Kubernetes version | "" |
nameOverride | String to partially override common.names.name | "" |
fullnameOverride | String to fully override common.names.fullname | "" |
namespaceOverride | String to fully override common.names.namespace | "" |
commonLabels | Labels to add to all deployed objects | {} |
commonAnnotations | Annotations to add to all deployed objects | {} |
clusterDomain | Kubernetes cluster domain name | cluster.local |
extraDeploy | Array of extra objects to deploy with the release | [] |
diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false |
diagnosticMode.command | Command to override all containers in the deployment | ["sleep"] |
diagnosticMode.args | Args to override all containers in the deployment | ["infinity"] |
Name | Description | Value |
---|---|---|
image.registry | ClickHouse image registry | REGISTRY_NAME |
image.repository | ClickHouse image repository | REPOSITORY_NAME/clickhouse |
image.digest | ClickHouse image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | "" |
image.pullPolicy | ClickHouse image pull policy | IfNotPresent |
image.pullSecrets | ClickHouse image pull secrets | [] |
image.debug | Enable ClickHouse image debug mode | false |
clusterName | ClickHouse cluster name | default |
shards | Number of ClickHouse shards to deploy | 2 |
replicaCount | Number of ClickHouse replicas per shard to deploy | 3 |
distributeReplicasByZone | Schedules replicas of the same shard to different availability zones | false |
containerPorts.http | ClickHouse HTTP container port | 8123 |
containerPorts.https | ClickHouse HTTPS container port | 8443 |
containerPorts.tcp | ClickHouse TCP container port | 9000 |
containerPorts.tcpSecure | ClickHouse TCP (secure) container port | 9440 |
containerPorts.keeper | ClickHouse keeper TCP container port | 2181 |
containerPorts.keeperSecure | ClickHouse keeper TCP (secure) container port | 3181 |
containerPorts.keeperInter | ClickHouse keeper interserver TCP container port | 9444 |
containerPorts.mysql | ClickHouse MySQL |
Note: the README for this chart is longer than the DockerHub length limit of 25000, so it has been trimmed. The full README can be found at https://github.com/bitnami/charts/blob/main/bitnami/clickhouse/README.md