bitnamicharts/redis
Bitnami Helm chart for Redis(R)
Redis(R) is an open source, advanced key-value store. It is often referred to as a data structure server since keys can contain strings, hashes, lists, sets and sorted sets.
Disclaimer: Redis is a registered trademark of Redis Ltd. Any rights therein are reserved to Redis Ltd. Any use by Bitnami is for referential purposes only and does not indicate any sponsorship, endorsement, or affiliation between Redis Ltd. and Bitnami.
helm install my-release oci://registry-1.docker.io/bitnamicharts/redis
Looking to use Redis® in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.
This chart bootstraps a Redis® deployment on a Kubernetes cluster using the Helm package manager.
Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.
You can choose either of the two Redis® Helm charts for deploying a Redis® cluster.
The main features of each chart are the following:
Redis® | Redis® Cluster |
---|---|
Supports multiple databases | Supports only one database. Better if you have a big dataset |
Single write point (single master) | Multiple write points (multiple masters) |
To install the chart with the release name my-release:
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/redis
Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.
The command deploys Redis® on the Kubernetes cluster in the default configuration. The Parameters section lists the parameters that can be configured during installation.
Tip: List all releases using helm list
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the resources value (check the parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.
To make this process easier, the chart contains the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset in production workloads is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
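As a minimal sketch (assuming the chart's usual master/replica value structure), explicit resources for the master could be combined with a preset for the replicas like this:
master:
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi
replica:
  resourcesPreset: small   # example preset name; check the bitnami/common chart for the available presets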
This chart can be integrated with Prometheus by setting metrics.enabled to true. This will deploy a sidecar container with redis_exporter in all pods and a metrics service, which can be configured under the metrics.service section. This metrics service will have the necessary annotations to be automatically scraped by Prometheus.
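For example, a minimal values sketch enabling the exporter (the empty annotations map is only a placeholder for any extra annotations you may want on the metrics service):
metrics:
  enabled: true
  service:
    annotations: {}   # extra annotations for the metrics service, if needed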
Prometheus requirements
It is necessary to have a working installation of Prometheus or Prometheus Operator for the integration to work. Install the Bitnami Prometheus helm chart or the Bitnami Kube Prometheus helm chart to easily have a working Prometheus in your cluster.
Integration with Prometheus Operator
The chart can deploy ServiceMonitor objects for integration with Prometheus Operator installations. To do so, set the value metrics.serviceMonitor.enabled=true. Ensure that the Prometheus Operator CustomResourceDefinitions are installed in the cluster or it will fail with the following error:
no matches for kind "ServiceMonitor" in version "monitoring.coreos.com/v1"
Install the Bitnami Kube Prometheus helm chart to obtain the necessary CRDs and the Prometheus Operator.
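For example, assuming Prometheus Operator watches the monitoring namespace (adjust this to your setup), the following values could be used:
metrics:
  enabled: true
  serviceMonitor:
    enabled: true
    namespace: monitoring   # hypothetical namespace watched by your Prometheus Operator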
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container, significant changes, or critical vulnerabilities exist.
To modify the application version used in this chart, specify a different version of the image using the image.tag parameter and/or a different repository using the image.repository parameter.
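For example, a sketch pinning the image to a specific tag (the tag below is hypothetical; pin to the exact immutable tag you have validated):
image:
  repository: bitnami/redis
  tag: 7.4.2-debian-12-r0   # hypothetical immutable tag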
This chart is equipped with the ability to bring online a set of Pods that connect to an existing Redis deployment that lies outside of Kubernetes. This effectively creates a hybrid Redis Deployment where both Pods in Kubernetes and Instances such as Virtual Machines can partake in a single Redis Deployment. This is helpful in situations where one may be migrating Redis from Virtual Machines into Kubernetes, for example. To take advantage of this, use the following as an example configuration:
replica:
  externalMaster:
    enabled: true
    host: external-redis-0.internal
sentinel:
  externalMaster:
    enabled: true
    host: external-redis-0.internal
:warning: This is currently limited to clusters in which Sentinel and Redis run on the same node! :warning:
Please also note that the external sentinel must be listening on port 26379, and this is currently not configurable.
Once the Kubernetes Redis Deployment is online and confirmed to be working with the existing cluster, the configuration can then be removed and the cluster will remain connected.
This chart is equipped to allow leveraging the ExternalDNS project. Doing so will enable ExternalDNS to publish the FQDN for each instance, in the format of <pod-name>.<release-name>.<dns-suffix>.
For example, when using the following configuration:
useExternalDNS:
  enabled: true
  suffix: prod.example.org
  additionalAnnotations:
    ttl: 10
On a cluster where the name of the Helm release is a, the hostname of a Pod is generated as: a-redis-node-0.a-redis.prod.example.org. The IP of that FQDN will match that of the associated Pod. This modifies the following parameters of the Redis/Sentinel configuration using this new FQDN:
replica-announce-ip
known-sentinel
known-replica
announce-ip
:warning: This requires a working installation of external-dns to be fully functional. :warning:
See the official ExternalDNS documentation for additional configuration options.
Default: Master-Replicas
When installing the chart with architecture=replication, it will deploy a Redis® master StatefulSet and a Redis® replicas StatefulSet. The replicas will be read-replicas of the master. Two services will be exposed: one pointing to the master, where read-write operations can be performed, and one pointing to the replicas, where only read operations are allowed by default.
In case the master crashes, the replicas will wait until the master node is respawned again by the Kubernetes Controller Manager.
Standalone
When installing the chart with architecture=standalone, it will deploy a standalone Redis® StatefulSet. A single service will be exposed, pointing to the standalone node, where both read and write operations can be performed.
Master-Replicas with Sentinel
When installing the chart with architecture=replication and sentinel.enabled=true, it will deploy a Redis® master StatefulSet (only one master allowed) and a Redis® replicas StatefulSet. In this case, the pods will contain an extra container with Redis® Sentinel. This container will form a cluster of Redis® Sentinel nodes, which will promote a new master in case the actual one fails.
On graceful termination of the Redis® master pod, a failover of the master is initiated to promote a new master. The Redis® Sentinel container in this pod will wait for the failover to occur before terminating. If sentinel.redisShutdownWaitFailover=true is set (the default), the Redis® container will wait for the failover as well before terminating. This increases availability for reads during failover, but may cause stale reads until all clients have switched to the new master.
In addition to this, only one service is exposed, providing access to both Redis® (port 6379) and Redis® Sentinel (port 26379).
For read-only operations, access the service using port 6379. For write operations, it's necessary to access the Redis® Sentinel cluster and query the current master using the command below (using redis-cli or similar):
SENTINEL get-master-addr-by-name <name of your MasterSet. e.g: mymaster>
This command will return the address of the current master, which can be accessed from inside the cluster.
In case the current master crashes, the Sentinel containers will elect a new master node.
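As a reference, a minimal values sketch enabling this topology (assuming the default master set name mymaster) could be:
architecture: replication
sentinel:
  enabled: true
  masterSet: mymaster   # assumed default name, used in SENTINEL get-master-addr-by-name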
When master.count is greater than 1, special care must be taken to create a consistent setup.
An example use case is the creation of a redundant set of standalone masters or master-replicas per Kubernetes node, where you must ensure that no more than 1 master is deployed per Kubernetes node. One way of achieving this is by setting master.service.internalTrafficPolicy=Local in combination with a master.affinity.podAntiAffinity spec to never schedule more than one master per Kubernetes node.
It's recommended to only change master.count if you know what you are doing.
master.count greater than 1 is not designed for use when sentinel.enabled=true.
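A hedged sketch of such a setup (the pod label used in the anti-affinity selector is an assumption; check the labels rendered by your release):
master:
  count: 3
  service:
    internalTrafficPolicy: Local
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        - topologyKey: kubernetes.io/hostname
          labelSelector:
            matchLabels:
              app.kubernetes.io/component: master   # assumed label of the master pods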
The Bitnami Redis chart, when upgrading, reuses the secret previously rendered by the chart or the one specified in auth.existingSecret. To update credentials, use one of the following:
helm upgrade specifying a new password in auth.password
helm upgrade specifying a new secret in auth.existingSecret
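For example, a sketch of the first option (the password value below is a placeholder):
auth:
  password: "my-new-password"   # placeholder; alternatively point auth.existingSecret to a new secret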
To use a password file for Redis® you need to create a secret containing the password and then deploy the chart using that secret. Follow these instructions:
Create the secret with the password. The file containing the password must be named redis-password:
kubectl create secret generic redis-password-secret --from-file=redis-password.yaml
Deploy the chart using the following parameters:
usePassword=true
usePasswordFiles=true
existingSecret=redis-password-secret
sentinels.enabled=true
metrics.enabled=true
TLS support can be enabled in the chart by specifying the tls. parameters while creating a release. The following parameters should be configured to properly enable the TLS support in the cluster:
tls.enabled: Enable TLS support. Defaults to false.
tls.existingSecret: Name of the secret that contains the certificates. No defaults.
tls.certFilename: Certificate filename. No defaults.
tls.certKeyFilename: Certificate key filename. No defaults.
tls.certCAFilename: CA certificate filename. No defaults.
For example:
First, create the secret with the certificate files:
kubectl create secret generic certificates-tls-secret --from-file=./cert.pem --from-file=./cert.key --from-file=./ca.pem
Then, use the following parameters:
tls.enabled="true"
tls.existingSecret="certificates-tls-secret"
tls.certFilename="cert.pem"
tls.certKeyFilename="cert.key"
tls.certCAFilename="ca.pem"
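The same settings expressed as a values file look like this:
tls:
  enabled: true
  existingSecret: certificates-tls-secret
  certFilename: cert.pem
  certKeyFilename: cert.key
  certCAFilename: ca.pem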
The chart can optionally start a metrics exporter for Prometheus. The metrics endpoint (port 9121) is exposed in the service. Metrics can be scraped from within the cluster using a configuration similar to the one described in the example Prometheus scrape configuration. If metrics are to be scraped from outside the cluster, the Kubernetes API proxy can be utilized to access the endpoint.
If you have enabled TLS by specifying tls.enabled=true, you also need to specify TLS options for the metrics exporter. You can do that via metrics.extraArgs. You can find the metrics exporter CLI flags for TLS here. For example:
You can either specify metrics.extraArgs.skip-tls-verification=true to skip TLS verification, or provide the following values under metrics.extraArgs for TLS client authentication:
tls-client-key-file
tls-client-cert-file
tls-ca-cert-file
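For example (the file paths below are hypothetical mount locations; adjust them to wherever the certificates are mounted in the exporter container):
metrics:
  extraArgs:
    skip-tls-verification: true
    # or, for TLS client authentication instead of skipping verification:
    # tls-client-key-file: /certs/cert.key
    # tls-client-cert-file: /certs/cert.pem
    # tls-ca-cert-file: /certs/ca.pem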
A custom Lua script can be added to the redis-exporter sidecar by way of the metrics.extraArgs.script parameter. The pathname of the script must exist on the container, or the redis_exporter process (and therefore the whole pod) will refuse to start. The script can be provided to the sidecar containers via the metrics.extraVolumes and metrics.extraVolumeMounts parameters:
metrics:
  extraVolumeMounts:
    - name: '{{ printf "%s-metrics-script-file" (include "common.names.fullname" .) }}'
      mountPath: '{{ printf "/mnt/%s/" (include "common.names.name" .) }}'
      readOnly: true
  extraVolumes:
    - name: '{{ printf "%s-metrics-script-file" (include "common.names.fullname" .) }}'
      configMap:
        name: '{{ printf "%s-metrics-script" (include "common.names.fullname" .) }}'
  extraArgs:
    script: '{{ printf "/mnt/%s/my_custom_metrics.lua" (include "common.names.name" .) }}'
Then deploy the script into the correct location via extraDeploy:
extraDeploy:
  - apiVersion: v1
    kind: ConfigMap
    metadata:
      name: '{{ printf "%s-metrics-script" (include "common.names.fullname" .) }}'
    data:
      my_custom_metrics.lua: |
        -- LUA SCRIPT CODE HERE, e.g.,
        return {'bitnami_makes_the_best_charts', '1'}
Redis® may require some changes in the kernel of the host machine to work as expected, in particular increasing the somaxconn value and disabling transparent huge pages. To do so, you can set up a privileged initContainer with the sysctlImage config values, for example:
sysctlImage:
  enabled: true
  mountHostSys: true
  command:
    - /bin/sh
    - -c
    - |-
      install_packages procps
      sysctl -w net.core.somaxconn=10000
      echo never > /host-sys/kernel/mm/transparent_hugepage/enabled
Alternatively, for Kubernetes 1.12+ you can set securityContext.sysctls, which will configure sysctls for master and slave pods. Example:
securityContext:
  sysctls:
    - name: net.core.somaxconn
      value: "10000"
Note that this will not disable transparent huge pages.
To backup and restore Redis deployments on Kubernetes, you will need to create a snapshot of the data in the source cluster, and later restore it in a new cluster with the new parameters. Follow the instructions below:
Step 1: Backup the deployment
Connect to one of the nodes and start the Redis CLI tool. Then, run the commands below:
$ kubectl exec -it my-release-master-0 bash
$ redis-cli
127.0.0.1:6379> auth your_current_redis_password
OK
127.0.0.1:6379> save
OK
Copy the dump file from the Redis node:
kubectl cp my-release-master-0:/data/dump.rdb dump.rdb -c redis
Step 2: Restore the data on the destination cluster
To restore the data in a new cluster, you will need to create a PVC and then upload the dump.rdb file to the new volume.
Follow these steps:
In the values.yaml file, set the appendonly parameter to no. You can skip this step if it is already configured as no.
commonConfiguration: |-
  # Enable AOF https://redis.io/topics/persistence#append-only-file
  appendonly no
  # Disable RDB persistence, AOF persistence already enabled.
  save ""
Note that the Enable AOF comment belongs to the original config file and what you're actually doing is disabling it. This change will only be necessary for the temporary cluster you're creating to upload the dump.
Start the new cluster to create the PVCs. Use the command below as an example:
helm install new-redis -f values.yaml . --set cluster.enabled=true --set cluster.slaveCount=3
Now that the PVCs were created, stop the release and copy the dump.rdb file onto the persisted data by using a helper pod.
$ helm delete new-redis
$ kubectl run -i --rm --tty volpod --overrides='
{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "redisvolpod"
  },
  "spec": {
    "containers": [{
      "command": [
        "tail",
        "-f",
        "/dev/null"
      ],
      "image": "bitnami/minideb",
      "name": "mycontainer",
      "volumeMounts": [{
        "mountPath": "/mnt",
        "name": "redisdata"
      }]
    }],
    "restartPolicy": "Never",
    "volumes": [{
      "name": "redisdata",
      "persistentVolumeClaim": {
        "claimName": "redis-data-new-redis-master-0"
      }
    }]
  }
}' --image="bitnami/minideb"
$ kubectl cp dump.rdb redisvolpod:/mnt/dump.rdb
$ kubectl delete pod redisvolpod
Restart the cluster:
INFO: The appendonly parameter can be safely restored to your desired value.
helm install new-redis -f values.yaml . --set cluster.enabled=true --set cluster.slaveCount=3
To enable network policy for Redis®, install a networking plugin that implements the Kubernetes NetworkPolicy spec, and set networkPolicy.enabled to true.
With NetworkPolicy enabled, only pods with the generated client label will be able to connect to Redis. This label will be displayed in the output after a successful install.
With networkPolicy.ingressNSMatchLabels, pods from other namespaces can connect to Redis. Set networkPolicy.ingressNSPodMatchLabels to match pod labels in the matched namespace. For example, for a namespace labeled redis=external and pods in that namespace labeled redis-client=true, the fields should be set:
networkPolicy:
  enabled: true
  ingressNSMatchLabels:
    redis: external
  ingressNSPodMatchLabels:
    redis-client: true
Setting Pod's affinity
This chart allows you to set your custom affinity using the XXX.affinity parameter(s). Find more information about Pod affinity in the Kubernetes documentation.
As an alternative, you can use the preset configurations for pod affinity, pod anti-affinity, and node affinity available in the bitnami/common chart. To do so, set the XXX.podAffinityPreset, XXX.podAntiAffinityPreset, or XXX.nodeAffinityPreset parameters.
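For example, a sketch using the presets for the master and replica pods (the node label used below is only an illustration):
master:
  podAntiAffinityPreset: hard   # spread master pods across nodes
replica:
  podAntiAffinityPreset: soft
  nodeAffinityPreset:
    type: hard
    key: kubernetes.io/arch   # illustrative node label
    values:
      - amd64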
By default, the chart mounts a Persistent Volume at the /data path. The volume is created using dynamic volume provisioning. If a Persistent Volume Claim already exists, specify it during installation.
helm install my-release --set master.persistence.existingClaim=PVC_NAME oci://REGISTRY_NAME/REPOSITORY_NAME/redis
Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.
Name | Description | Value |
---|---|---|
global.imageRegistry | Global Docker image registry | "" |
global.imagePullSecrets | Global Docker registry secret names as an array | [] |
global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | "" |
global.storageClass |
Note: the README for this chart is longer than the DockerHub length limit of 25000, so it has been trimmed. The full README can be found at https://github.com/bitnami/charts/blob/main/bitnami/redis/README.md