Bitnami package for Valkey

Valkey is an open source (BSD) high-performance key/value datastore that supports a variety of workloads such as caching and message queues, and can act as a primary database.

Overview of Valkey

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

helm install my-release oci://registry-1.docker.io/bitnamicharts/valkey

Looking to use Valkey in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.

Introduction

This chart bootstraps a Valkey deployment on a Kubernetes cluster using the Helm package manager.

Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • PV provisioner support in the underlying infrastructure

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/valkey

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The command deploys Valkey on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.

Tip: List all releases using helm list

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are configured through the resources value (check the parameters table). Setting requests is essential for production workloads and these should be adapted to your specific use case.

To make this process easier, the chart contains the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset is discouraged for production workloads as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
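
For example, a minimal values snippet combining both approaches (the preset name and the request/limit figures below are illustrative assumptions, not recommendations):

primary:
  resourcesPreset: "small"
replica:
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi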

Update credentials

The Bitnami Valkey chart, when upgrading, reuses the secret previously rendered by the chart or the one specified in auth.existingSecret. To update credentials, use one of the following:

  • Run helm upgrade specifying a new password in auth.password
  • Run helm upgrade specifying a new secret in auth.existingSecret
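
For example, a sketch of the first option (the release name and password below are placeholders):

helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/valkey \
  --set auth.password=NEW_PASSWORD
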
Prometheus metrics

This chart can be integrated with Prometheus by setting metrics.enabled to true. This will deploy a sidecar container with redis_exporter in all pods and a metrics service, which can be configured under the metrics.service section. This metrics service will have the necessary annotations to be automatically scraped by Prometheus.

Prometheus requirements

It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.

Integration with Prometheus Operator

The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:

no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"

Install the Bitnami Kube Prometheus helm chart to obtain the necessary CRDs and the Prometheus Operator.
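
For example, a minimal install enabling both the exporter sidecar and the ServiceMonitor (the release name is a placeholder):

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/valkey \
  --set metrics.enabled=true \
  --set metrics.serviceMonitor.enabled=true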

Rolling vs. Immutable tags

It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.

Bitnami will release a new chart updating its containers if a new version of the main container is available, significant changes occur, or critical vulnerabilities exist.

Use a different Valkey version

To modify the application version used in this chart, specify a different version of the image using the image.tag parameter and/or a different repository using the image.repository parameter.
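
For example (the tag below is illustrative; pick one from the image's published tags):

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/valkey \
  --set image.tag=8.1.0-debian-12-r0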

Bootstrapping with an External Cluster

This chart can bring online a set of Pods that connect to an existing Valkey deployment that lies outside of Kubernetes. This effectively creates a hybrid Valkey deployment where both Pods in Kubernetes and instances such as virtual machines participate in a single Valkey deployment. This is helpful, for example, when migrating Valkey from virtual machines into Kubernetes. To take advantage of this, use the following as an example configuration:

replica:
  externalPrimary:
    enabled: true
    host: external-valkey-0.internal
sentinel:
  externalPrimary:
    enabled: true
    host: external-valkey-0.internal

:warning: This is currently limited to clusters in which Sentinel and Valkey run on the same node! :warning:

Please also note that the external sentinel must be listening on port 26379, and this is currently not configurable.

Once the Kubernetes Valkey Deployment is online and confirmed to be working with the existing cluster, the configuration can then be removed and the cluster will remain connected.

External DNS

This chart is equipped to leverage the ExternalDNS project. Doing so enables ExternalDNS to publish the FQDN for each instance, in the format <pod-name>.<release-name>.<dns-suffix>. For example, when using the following configuration:

useExternalDNS:
  enabled: true
  suffix: prod.example.org
  additionalAnnotations:
    ttl: 10

On a cluster where the name of the Helm release is a, the hostname of a Pod is generated as: a-valkey-node-0.a-valkey.prod.example.org. The IP of that FQDN will match that of the associated Pod. This modifies the following parameters of the Valkey/Sentinel configuration using this new FQDN:

  • replica-announce-ip
  • known-sentinel
  • known-replica
  • announce-ip

:warning: This requires a working installation of external-dns to be fully functional. :warning:

See the official ExternalDNS documentation for additional configuration options.

Cluster topologies

Default: Primary-Replicas

When installing the chart with architecture=replication, it will deploy a Valkey primary StatefulSet and a Valkey replicas StatefulSet. The replicas will be read-replicas of the primary. Two services will be exposed:

  • Valkey Primary service: Points to the primary, where read-write operations can be performed
  • Valkey Replicas service: Points to the replicas, where only read operations are allowed by default.

In case the primary crashes, the replicas will wait until the primary node is respawned again by the Kubernetes Controller Manager.

Standalone

When installing the chart with architecture=standalone, it will deploy a standalone Valkey StatefulSet. A single service will be exposed:

  • Valkey Primary service: Points to the primary, where read-write operations can be performed

Primary-Replicas with Sentinel

When installing the chart with architecture=replication and sentinel.enabled=true, it will deploy a Valkey primary StatefulSet (only one primary allowed) and a Valkey replicas StatefulSet. In this case, the pods will contain an extra container with Valkey Sentinel. This container will form a cluster of Valkey Sentinel nodes, which will promote a new primary in case the actual one fails.

On graceful termination of the Valkey primary pod, a failover of the primary is initiated to promote a new primary. The Valkey Sentinel container in this pod will wait for the failover to occur before terminating. If sentinel.valkeyShutdownWaitFailover=true is set (the default), the Valkey container will wait for the failover as well before terminating. This increases availability for reads during failover, but may cause stale reads until all clients have switched to the new primary.

In addition to this, only one service is exposed:

  • Valkey service: Exposes port 6379 for Valkey read-only operations and port 26379 for accessing Valkey Sentinel.

For read-only operations, access the service using port 6379. For write operations, it's necessary to access the Valkey Sentinel cluster and query the current primary using the command below (using valkey-cli or similar):

SENTINEL get-primary-addr-by-name <name of your PrimarySet. e.g: myprimary>

This command will return the address of the current primary, which can be accessed from inside the cluster.
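
For example, a sketch using valkey-cli through one of the chart's pods (the pod name follows the <release>-valkey-node-<n> pattern described in the External DNS section above; the primary set name myprimary is an assumption):

kubectl exec -it my-release-valkey-node-0 -- valkey-cli -p 26379 \
  SENTINEL get-primary-addr-by-name myprimary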

In case the current primary crashes, the Sentinel containers will elect a new primary node.

primary.replicaCount greater than 1 is not designed for use when sentinel.enabled=true.

Multiple primary nodes (experimental)

When primary.replicaCount is greater than 1, special care must be taken to create a consistent setup.

An example use case is the creation of a redundant set of standalone primary nodes or primary-replicas per Kubernetes node, where you must ensure:

  • No more than 1 primary can be deployed per Kubernetes node
  • Replicas and writers can only see the single primary of their own Kubernetes node

One way of achieving this is by setting primary.service.internalTrafficPolicy=Local in combination with a primary.affinity.podAntiAffinity spec to never schedule more than one primary per Kubernetes node.
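
A minimal sketch of that combination (the pod labels below follow common app.kubernetes.io conventions and are assumptions; adjust them to the labels your release actually sets):

primary:
  service:
    internalTrafficPolicy: Local
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchLabels:
              app.kubernetes.io/component: primary
          topologyKey: kubernetes.io/hostname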

It's recommended to only change primary.replicaCount if you know what you are doing. primary.replicaCount greater than 1 is not designed for use when sentinel.enabled=true.

Using a password file

To use a password file for Valkey you need to create a secret containing the password and then deploy the chart using that secret. Follow these instructions:

  • Create the secret with the password. The file containing the password must be called valkey-password.
kubectl create secret generic valkey-password-secret --from-file=valkey-password
  • Deploy the Helm chart using the secret name as a parameter:
usePassword=true
usePasswordFile=true
existingSecret=valkey-password-secret
sentinel.enabled=true
metrics.enabled=true
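
Putting it together, a sketch of the full install command using the parameters listed above (the release name is a placeholder):

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/valkey \
  --set usePassword=true \
  --set usePasswordFile=true \
  --set existingSecret=valkey-password-secret \
  --set sentinel.enabled=true \
  --set metrics.enabled=true
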
Securing traffic using TLS

TLS support can be enabled in the chart by specifying the tls.* parameters when creating a release. The following parameters should be configured to properly enable TLS support in the cluster:

  • tls.enabled: Enable TLS support. Defaults to false
  • tls.existingSecret: Name of the secret that contains the certificates. No defaults.
  • tls.certFilename: Certificate filename. No defaults.
  • tls.certKeyFilename: Certificate key filename. No defaults.
  • tls.certCAFilename: CA Certificate filename. No defaults.

For example:

First, create the secret with the certificates files:

kubectl create secret generic certificates-tls-secret --from-file=./cert.pem --from-file=./cert.key --from-file=./ca.pem

Then, use the following parameters:

tls.enabled="true"
tls.existingSecret="certificates-tls-secret"
tls.certFilename="cert.pem"
tls.certKeyFilename="cert.key"
tls.certCAFilename="ca.pem"

Metrics

The chart can optionally start a metrics exporter for Prometheus. The metrics endpoint (port 9121) is exposed in the service. Metrics can be scraped from within the cluster using something similar to the example Prometheus scrape configuration. If metrics are to be scraped from outside the cluster, the Kubernetes API proxy can be utilized to access the endpoint.

If you have enabled TLS by setting tls.enabled=true, you also need to pass TLS options to the metrics exporter. You can do that via metrics.extraArgs. You can find the metrics exporter CLI flags for TLS here. For example, you can either set metrics.extraArgs.skip-tls-verification=true to skip TLS verification, or provide the following values under metrics.extraArgs for TLS client authentication:

tls-client-key-file
tls-client-cert-file
tls-ca-cert-file
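
For example, assuming the exporter can reach the same certificate files mounted for Valkey (the paths below are illustrative assumptions):

metrics:
  extraArgs:
    tls-client-key-file: /opt/bitnami/valkey/certs/cert.key
    tls-client-cert-file: /opt/bitnami/valkey/certs/cert.pem
    tls-ca-cert-file: /opt/bitnami/valkey/certs/ca.pem
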
Deploy a custom metrics script in the sidecar

A custom Lua script can be added to the redis-exporter sidecar via the metrics.extraArgs.script parameter. The script's path must exist in the container, or the redis_exporter process (and therefore the whole pod) will refuse to start. The script can be provided to the sidecar containers via the metrics.extraVolumes and metrics.extraVolumeMounts parameters:

metrics:
  extraVolumeMounts:
    - name: '{{ printf "%s-metrics-script-file" (include "common.names.fullname" .) }}'
      mountPath: '{{ printf "/mnt/%s/" (include "common.names.name" .) }}'
      readOnly: true
  extraVolumes:
    - name: '{{ printf "%s-metrics-script-file" (include "common.names.fullname" .) }}'
      configMap:
        name: '{{ printf "%s-metrics-script" (include "common.names.fullname" .) }}'
  extraArgs:
    script: '{{ printf "/mnt/%s/my_custom_metrics.lua" (include "common.names.name" .) }}'

Then deploy the script into the correct location via extraDeploy:

extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: '{{ printf "%s-metrics-script" (include "common.names.fullname" .) }}'
    data:
      my_custom_metrics.lua: |
        -- LUA SCRIPT CODE HERE, e.g.,
        return {'bitnami_makes_the_best_charts', '1'}
Host Kernel Settings

Valkey may require some changes in the kernel of the host machine to work as expected, in particular increasing the somaxconn value and disabling transparent huge pages. To do so, you can set securityContext.sysctls which will configure sysctls for primary and replica pods. Example:

securityContext:
  sysctls:
  - name: net.core.somaxconn
    value: "10000"

Note that this will not disable transparent huge pages.

Backup and restore

To backup and restore Valkey deployments on Kubernetes, you will need to create a snapshot of the data in the source cluster, and later restore it in a new cluster with the new parameters. Follow the instructions below:

Step 1: Backup the deployment

  • Connect to one of the nodes and start the Valkey CLI tool. Then, run the commands below:

    $ kubectl exec -it my-release-primary-0 -- bash
    $ valkey-cli
    127.0.0.1:6379> auth your_current_valkey_password
    OK
    127.0.0.1:6379> save
    OK
    
  • Copy the dump file from the Valkey node:

    kubectl cp my-release-primary-0:/data/dump.rdb dump.rdb -c valkey
    

Step 2: Restore the data on the destination cluster

To restore the data in a new cluster, you will need to create a PVC and then upload the dump.rdb file to the new volume.

Follow these steps:

  • In the values.yaml file set the appendonly parameter to no. You can skip this step if it is already configured as no

    commonConfiguration: |-
       # Enable AOF https://valkey.io/topics/persistence#append-only-file
       appendonly no
       # Disable RDB persistence, AOF persistence already enabled.
       save ""
    

    Note that the Enable AOF comment belongs to the original config file and what you're actually doing is disabling it. This change will only be necessary for the temporary cluster you're creating to upload the dump.

  • Start the new cluster to create the PVCs. Use the command below as an example:

    helm install new-valkey -f values.yaml . --set architecture=replication --set replica.replicaCount=3
    
  • Now that the PVCs were created, stop the release and copy the dump.rdb file onto the persisted data by using a helper pod.

    $ helm delete new-valkey
    
    $ kubectl run -i --rm --tty volpod --overrides='
    {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": "valkeyvolpod"
        },
        "spec": {
            "containers": [{
               "command": [
                    "tail",
                    "-f",
                    "/dev/null"
               ],
               "image": "bitnami/os-shell",
               "name": "mycontainer",
               "volumeMounts": [{
                   "mountPath": "/mnt",
                   "name": "valkeydata"
                }]
            }],
            "restartPolicy": "Never",
            "volumes": [{
                "name": "valkeydata",
                "persistentVolumeClaim": {
                    "claimName": "valkey-data-new-valkey-primary-0"
                }
            }]
        }
    }' --image="bitnami/os-shell"
    
    $ kubectl cp dump.rdb volpod:/mnt/dump.rdb
    $ kubectl delete pod volpod
    
  • Restart the cluster:

    INFO: The appendonly parameter can be safely restored to your desired value.

    helm install new-valkey -f values.yaml . --set architecture=replication --set replica.replicaCount=3
    
NetworkPolicy

To enable network policy for Valkey, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set networkPolicy.enabled to true.

With NetworkPolicy enabled, only pods with the generated client label will be able to connect to Valkey. This label will be displayed in the output after a successful install.
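
For example, assuming a release named my-release and the label format Bitnami charts typically generate (verify the exact label in your install output):

kubectl label pod my-client-pod my-release-valkey-client=true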

With networkPolicy.ingressNSMatchLabels, pods from other namespaces can connect to Valkey. Set networkPolicy.ingressNSPodMatchLabels to match pod labels in the matched namespace. For example, for a namespace labeled valkey=external and pods in that namespace labeled valkey-client=true, the fields should be set as follows:

networkPolicy:
  enabled: true
  ingressNSMatchLabels:
    valkey: external
  ingressNSPodMatchLabels:
    valkey-client: "true"

Setting Pod's affinity

This chart allows you to set your custom affinity using the XXX.affinity parameter(s). Find more information about Pod's affinity in the Kubernetes documentation.

As an alternative, you can use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the XXX.podAffinityPreset, XXX.podAntiAffinityPreset, or XXX.nodeAffinityPreset parameters.

Persistence

By default, the chart mounts a Persistent Volume at the /data path. The volume is created using dynamic volume provisioning. If a Persistent Volume Claim already exists, specify it during installation.

Existing PersistentVolumeClaim
  1. Create the PersistentVolume
  2. Create the PersistentVolumeClaim
  3. Install the chart
helm install my-release --set primary.persistence.existingClaim=PVC_NAME oci://REGISTRY_NAME/REPOSITORY_NAME/valkey

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

Parameters

Global parameters

| Name | Description | Value |
| ---- | ----------- | ----- |
| global.imageRegistry | Global Docker image registry | "" |
| global.imagePullSecrets | Global Docker registry secret names as an array | [] |
| global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | "" |
| global.storageClass | DEPRECATED: use global.defaultStorageClass instead | "" |
| global.valkey.password | Global Valkey password (overrides auth.password) | "" |
| global.security.allowInsecureImages | Allows skipping image verification | false |
| global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto |
Common parameters


Note: the README for this chart is longer than the DockerHub length limit of 25000, so it has been trimmed. The full README can be found at https://github.com/bitnami/charts/blob/main/bitnami/valkey/README.md
