bitnamicharts/kube-prometheus
Bitnami Helm chart for Prometheus Operator
Prometheus Operator provides easy monitoring definitions for Kubernetes services and deployment and management of Prometheus instances.
Overview of Prometheus Operator
Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.
```
helm install my-release oci://registry-1.docker.io/bitnamicharts/kube-prometheus
```
Looking to use Prometheus Operator in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.
This chart bootstraps Prometheus Operator on Kubernetes using the Helm package manager.
In the default configuration the chart deploys the following components on the Kubernetes cluster:
> :warning: IMPORTANT: Only one instance of the Prometheus Operator component should be running in the cluster. If you wish to deploy this chart to manage multiple instances of Prometheus in your Kubernetes cluster, you have to disable the installation of the Prometheus Operator component using the `operator.enabled=false` chart installation argument.
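A values file for such a secondary release could look like this (a minimal sketch; the filename and release name below are hypothetical):

```yaml
# values-secondary.yaml: reuse the Prometheus Operator already running in the cluster
operator:
  enabled: false
```

Pass it at install time, e.g. `helm install my-second-release oci://registry-1.docker.io/bitnamicharts/kube-prometheus -f values-secondary.yaml`.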
Bitnami charts can be used with Kubeapps for deployment and management of Helm Charts in clusters.
To install the chart with the release name `my-release`:

```
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kube-prometheus
```
> Note: You need to substitute the placeholders `REGISTRY_NAME` and `REPOSITORY_NAME` with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use `REGISTRY_NAME=registry-1.docker.io` and `REPOSITORY_NAME=bitnamicharts`.
The command deploys kube-prometheus on the Kubernetes cluster in the default configuration. The configuration section lists the parameters that can be configured during installation.
> Tip: List all releases using `helm list`.
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the `resources` value (check the parameter table). Setting requests is essential for production workloads, and these should be adapted to your specific use case. To make this process easier, the chart contains the `resourcesPreset` value, which automatically sets the `resources` section according to different presets. Check these presets in the bitnami/common chart. However, using `resourcesPreset` in production workloads is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
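As a sketch, explicit requests and limits for the Prometheus Operator container could be set via a values file (the `operator.resources` parameter name follows the chart's per-component convention; the figures are illustrative, not recommendations):

```yaml
operator:
  resources:
    requests:
      cpu: 100m
      memory: 128Mi
    limits:
      cpu: 250m
      memory: 256Mi
```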
It is strongly recommended to use immutable tags in a production environment. This ensures your deployment does not change automatically if the same tag is updated with a different image.
Bitnami will release a new chart updating its containers if a new version of the main container is available, significant changes occur, or critical vulnerabilities exist.
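For example, pinning the Prometheus Operator image by digest with the `operator.image.digest` parameter keeps the deployment immutable even if the tag is re-pushed (the digest below is a placeholder):

```yaml
operator:
  image:
    # If set, the digest overrides the image tag
    digest: "sha256:aa..."  # placeholder; substitute the digest you have validated
```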
The following values have been deprecated. See Upgrading below.
- `prometheus.additionalScrapeConfigsExternal.enabled`
- `prometheus.additionalScrapeConfigsExternal.name`
- `prometheus.additionalScrapeConfigsExternal.key`
It is possible to inject externally managed scrape configurations via a Secret by setting `prometheus.additionalScrapeConfigs.enabled` to `true` and `prometheus.additionalScrapeConfigs.type` to `external`. The Secret must exist in the same namespace as the chart deployment. Set the Secret name using the parameter `prometheus.additionalScrapeConfigs.external.name`, and the key containing the additional scrape configuration using the `prometheus.additionalScrapeConfigs.external.key` parameter.
```
prometheus.additionalScrapeConfigs.enabled=true
prometheus.additionalScrapeConfigs.type=external
prometheus.additionalScrapeConfigs.external.name=kube-prometheus-prometheus-scrape-config
prometheus.additionalScrapeConfigs.external.key=additional-scrape-configs.yaml
```
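The same configuration expressed as a values file (equivalent to the `--set` flags above):

```yaml
prometheus:
  additionalScrapeConfigs:
    enabled: true
    type: external
    external:
      name: kube-prometheus-prometheus-scrape-config
      key: additional-scrape-configs.yaml
```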
It is also possible to define scrape configurations to be managed by the Helm chart by setting `prometheus.additionalScrapeConfigs.enabled` to `true` and `prometheus.additionalScrapeConfigs.type` to `internal`. You can then use `prometheus.additionalScrapeConfigs.internal.jobList` to define a list of additional scrape jobs for Prometheus.
```
prometheus.additionalScrapeConfigs.enabled=true
prometheus.additionalScrapeConfigs.type=internal
prometheus.additionalScrapeConfigs.internal.jobList=
  - job_name: 'opentelemetry-collector'
    # metrics_path defaults to '/metrics'
    # scheme defaults to 'http'.
    static_configs:
      - targets: ['opentelemetry-collector:8889']
```
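Because the job list is multi-line YAML, it is usually easier to supply in a values file than via `--set`. The equivalent values-file form:

```yaml
prometheus:
  additionalScrapeConfigs:
    enabled: true
    type: internal
    internal:
      jobList:
        - job_name: 'opentelemetry-collector'
          # metrics_path defaults to '/metrics'
          # scheme defaults to 'http'.
          static_configs:
            - targets: ['opentelemetry-collector:8889']
```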
For more information, see the additional scrape configuration documentation.
It is possible to inject externally managed Prometheus alert relabel configurations via a Secret by setting `prometheus.additionalAlertRelabelConfigsExternal.enabled` to `true`. The Secret must exist in the same namespace as the chart deployment. Set the Secret name using the parameter `prometheus.additionalAlertRelabelConfigsExternal.name`, and the key containing the additional alert relabel configuration using the `prometheus.additionalAlertRelabelConfigsExternal.key` parameter. For instance, if you created a Secret named `kube-prometheus-prometheus-alert-relabel-config` and it contains a file named `additional-alert-relabel-configs.yaml`, use the parameters below:
```
prometheus.additionalAlertRelabelConfigsExternal.enabled=true
prometheus.additionalAlertRelabelConfigsExternal.name=kube-prometheus-prometheus-alert-relabel-config
prometheus.additionalAlertRelabelConfigsExternal.key=additional-alert-relabel-configs.yaml
```
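Equivalently, as a values file:

```yaml
prometheus:
  additionalAlertRelabelConfigsExternal:
    enabled: true
    name: kube-prometheus-prometheus-alert-relabel-config
    key: additional-alert-relabel-configs.yaml
```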
To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.
This chart allows setting custom Pod affinity using the `XXX.affinity` parameter(s). Find more information about Pod affinity in the Kubernetes documentation.
As an alternative, use one of the preset configurations for pod affinity, pod anti-affinity, and node affinity available at the bitnami/common chart. To do so, set the `XXX.podAffinityPreset`, `XXX.podAntiAffinityPreset`, or `XXX.nodeAffinityPreset` parameters.
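For instance, assuming the `prometheus` component exposes these preset parameters (the `soft` value comes from the bitnami/common presets, which also accept `hard`), a values file could request soft pod anti-affinity:

```yaml
prometheus:
  # 'soft' expresses a preference to spread pods across nodes without enforcing it
  podAntiAffinityPreset: soft
```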
Name | Description | Value |
---|---|---|
global.imageRegistry | Global Docker image registry | "" |
global.imagePullSecrets | Global Docker registry secret names as an array | [] |
global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | "" |
global.security.allowInsecureImages | Allows skipping image verification | false |
global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto |
Name | Description | Value |
---|---|---|
kubeVersion | Force target Kubernetes version (using Helm capabilities if not set) | "" |
nameOverride | String to partially override kube-prometheus.name template with a string (will prepend the release name) | "" |
fullnameOverride | String to fully override kube-prometheus.fullname template with a string | "" |
namespaceOverride | String to fully override common.names.namespace | "" |
commonAnnotations | Annotations to add to all deployed objects | {} |
commonLabels | Labels to add to all deployed objects | {} |
extraDeploy | Array of extra objects to deploy with the release | [] |
clusterDomain | Kubernetes cluster domain name | cluster.local |
Name | Description | Value |
---|---|---|
operator.enabled | Deploy Prometheus Operator to the cluster | true |
operator.image.registry | Prometheus Operator image registry | REGISTRY_NAME |
operator.image.repository | Prometheus Operator image repository | REPOSITORY_NAME/prometheus-operator |
operator.image.digest | Prometheus Operator image digest in the format sha256:aa.... Please note this parameter, if set, will override the tag | "" |
operator.image.pullPolicy | Prometheus Operator image pull policy | IfNotPresent |
operator.image.pullSecrets | Specify docker-registry secret names as an array | [] |
operator.extraArgs | Additional arguments passed to Prometheus Operator | [] |
operator.command | Override default container command (useful when using custom images) | [] |
operator.args | Override default container args (useful when using custom images) | [] |
operator.lifecycleHooks | Lifecycle hooks for the Prometheus Operator container(s) to automate configuration before or after startup | {} |
operator.extraEnvVars | Array with extra environment variables to add to Prometheus Operator nodes | [] |
operator.extraEnvVarsCM | Name of existing ConfigMap containing extra env vars for Prometheus Operator nodes | "" |
operator.extraEnvVarsSecret | Name of existing Secret containing extra env vars for Prometheus Operator nodes | "" |
operator.extraVolumes | Optionally specify extra list of additional volumes for the Prometheus Operator pod(s) | [] |
operator.extraVolumeMounts | Optionally specify extra list of additional volumeMounts for the Prometheus Operator container(s) | [] |
operator.sidecars | Add additional sidecar containers to the Prometheus Operator pod(s) | [] |
operator.initContainers | Add additional init containers to the Prometheus Operator pod(s) | [] |
operator.automountServiceAccountToken | Mount Service Account token in pod | true |
operator.hostAliases | Add deployment host aliases | [] |
operator.serviceAccount.create | Specify whether to create a ServiceAccount for Prometheus Operator | true |
operator.serviceAccount.name | The name of the ServiceAccount to create | "" |
operator.serviceAccount.automountServiceAccountToken | Automount service account token for the server service account | false |
operator.serviceAccount.annotations | Annotations for service account. Evaluated as a template. Only used if `create` is `true`. | {} |
operator.schedulerName | Name of the Kubernetes scheduler (other than default) | "" |
operator.terminationGracePeriodSeconds | Seconds the Prometheus Operator pod needs to terminate gracefully | "" |
operator.topologySpreadConstraints | Topology Spread Constraints for pod assignment | [] |
operator.podSecurityContext.enabled | Enable pod security context | true |
operator.podSecurityContext.fsGroupChangePolicy | Set filesystem group change policy | Always |
operator.podSecurityContext.sysctls | Set kernel settings using the sysctl interface | [] |
operator.podSecurityContext.supplementalGroups | Set filesystem extra groups | [] |
operator.podSecurityContext.fsGroup | Group ID for the container filesystem | 1001 |
operator.containerSecurityContext.enabled | Enabled containers' Security Context |
Note: the README for this chart is longer than the DockerHub length limit of 25000, so it has been trimmed. The full README can be found at https://github.com/bitnami/charts/blob/main/bitnami/kube-prometheus/README.md