bitnamicharts/kubeapps
Bitnami Helm chart for Kubeapps
Kubeapps is a web-based UI for launching and managing applications on Kubernetes. It allows users to deploy trusted applications and operators and to control user access to the cluster.
helm install my-release oci://registry-1.docker.io/bitnamicharts/kubeapps --namespace kubeapps --create-namespace
Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts. Check out the getting started guide to start deploying apps with Kubeapps.
Looking to use Kubeapps in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.
This chart bootstraps a Kubeapps deployment on a Kubernetes cluster using the Helm package manager.
With Kubeapps you can:
- Customize deployments through an intuitive, form-based user interface
- Inspect, upgrade and delete applications installed in the cluster
- Browse and deploy Helm charts from public or private chart repositories
- Browse and deploy Kubernetes Operators
Note: Kubeapps 2.0 and onwards supports Helm 3 only. While only the Helm 3 API is supported, in most cases, charts made for Helm 2 will still work.
It also packages the Bitnami PostgreSQL chart, which is required for bootstrapping a deployment for the database requirements of the Kubeapps application.
To install the chart with the release name my-release:
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps --namespace kubeapps --create-namespace
Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.
The command deploys Kubeapps on the Kubernetes cluster in the kubeapps namespace. The Parameters section lists the parameters that can be configured during installation.
Caveat: Only one Kubeapps installation is supported per namespace.
Once you have installed Kubeapps, follow the Getting Started Guide for additional information on how to access and use Kubeapps.
Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are inside the resources value (check the parameter table). Setting requests is essential for production workloads and these should be adapted to your specific use case.
To make this process easier, the chart contains the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset in production workloads is discouraged as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
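For example, a minimal values override for the frontend container might look like the following sketch (the request and limit figures are illustrative placeholders; adapt them to your workload):
frontend:
  resources:
    requests:
      cpu: 250m
      memory: 256Mi
    limits:
      cpu: 500m
      memory: 512Mi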
To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.
By default, Kubeapps will track the Bitnami Application Catalog. To change these defaults, override the apprepository.initialRepos object present in the values.yaml file with your desired parameters.
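As an illustration, an override pointing Kubeapps at a single custom chart repository could look like this sketch (the repository name and URL are placeholders):
apprepository:
  initialRepos:
    - name: my-repo                      # placeholder repository name
      url: https://example.com/charts    # placeholder repository URL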
Since v1.9.0 (and by default since v2.0), Kubeapps supports deploying and managing Operators within its dashboard. More information about how to enable and use this feature can be found in this guide.
Note: The Kubeapps frontend sets up a proxy to the Kubernetes API service, which means that when exposing the Kubeapps service to a network external to the Kubernetes cluster (perhaps on an internal or public network), the Kubernetes API will also be exposed for authenticated requests from that network. It is highly recommended that you use an OAuth2/OIDC provider with Kubeapps to ensure that your authentication proxy is exposed rather than the Kubeapps frontend. This ensures that only the configured users trusted by your Identity Provider will be able to reach the Kubeapps frontend and therefore the Kubernetes API. Kubernetes service token authentication should only be used for demonstration purposes, not in production environments.
LoadBalancer Service
The simplest way to expose the Kubeapps Dashboard is to assign a LoadBalancer type to the Kubeapps frontend Service. For example, you can use the following parameter: frontend.service.type=LoadBalancer
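For example, assuming the chart was installed as my-release in the kubeapps namespace (a sketch; adjust the release name and namespace to your installation, and substitute the registry and repository placeholders as above):
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps --namespace kubeapps --set frontend.service.type=LoadBalancer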
Wait for your cluster to assign a LoadBalancer IP or Hostname to the kubeapps Service and access it on that address:
kubectl get services --namespace kubeapps --watch
Ingress
This chart provides support for Ingress resources. If you have an ingress controller installed on your cluster, such as nginx-ingress-controller or contour, you can utilize the ingress controller to serve your application.
To enable ingress integration, please set ingress.enabled to true.
Most likely you will only want one hostname that maps to this Kubeapps installation (use the ingress.hostname parameter to set the hostname); however, it is possible to have more than one host. To facilitate this, the ingress.extraHosts object is an array.
For annotations, please see this document. Not all annotations are supported by all ingress controllers, but this document does a good job of indicating which annotations are supported by many popular ingress controllers. Annotations can be set using ingress.annotations.
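As a sketch, enabling the ingress with a single hostname might look like the following values override (the hostname and ingress class are placeholders; use the class that matches your installed controller):
ingress:
  enabled: true
  ingressClassName: nginx            # placeholder; match your ingress controller
  hostname: kubeapps.example.com     # placeholder hostname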
This chart will facilitate the creation of TLS secrets for use with the ingress controller; however, this is not required. There are four common use cases:
- Helm generates/manages certificate secrets based on the parameters.
- User generates/manages certificates separately.
- Helm creates self-signed certificates and generates/manages certificate secrets.
- An additional tool (like cert-manager) manages the secrets for the application.
In the first two cases, a certificate and a key are needed. We would expect them to look like this:
Certificate files should look like this (and there can be more than one certificate if there is a certificate chain):
-----BEGIN CERTIFICATE-----
MIID6TCCAtGgAwIBAgIJAIaCwivkeB5EMA0GCSqGSIb3DQEBCwUAMFYxCzAJBgNV
...
jScrvkiBO65F46KioCL9h5tDvomdU1aqpI/CBzhvZn1c0ZTf87tGQR8NK7v7
-----END CERTIFICATE-----
Keys should look like this:
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAvLYcyu8f3skuRyUgeeNpeDvYBCDcgq+LsWap6zbX5f8oLqp4
...
wrj2wDbCDCFmfqnSJ+dKI3vFLlEz44sAV8jX/kd4Y6ZTQhlLbYc=
-----END RSA PRIVATE KEY-----
If you are going to use Helm to manage the certificates based on the parameters, please copy these values into the certificate and key values for a given ingress.secrets entry.
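For instance, an ingress.secrets entry would look something like the following sketch (the secret name and the certificate contents are placeholders):
ingress:
  secrets:
    - name: kubeapps.local-tls       # placeholder secret name
      certificate: |
        -----BEGIN CERTIFICATE-----
        ...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...
        -----END RSA PRIVATE KEY-----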
In case you are going to manage TLS secrets separately, please note that you must use a TLS secret with name INGRESS_HOSTNAME-tls (where INGRESS_HOSTNAME is a placeholder to be replaced with the hostname you set using the ingress.hostname parameter).
To use self-signed certificates created by Helm, set both ingress.tls and ingress.selfSigned to true.
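For example (a sketch assuming the my-release release in the kubeapps namespace; substitute the registry and repository placeholders as above):
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps --namespace kubeapps --set ingress.enabled=true --set ingress.tls=true --set ingress.selfSigned=true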
If your cluster has a cert-manager add-on to automate the management and issuance of TLS certificates, set the ingress.certManager boolean to true to enable the corresponding annotations for cert-manager.
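A values sketch for that scenario (the hostname is a placeholder):
ingress:
  enabled: true
  tls: true
  certManager: true
  hostname: kubeapps.example.com     # placeholder hostname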
Parameters
Global parameters
Name | Description | Value |
---|---|---|
global.imageRegistry | Global Docker image registry | "" |
global.imagePullSecrets | Global Docker registry secret names as an array | [] |
global.defaultStorageClass | Global default StorageClass for Persistent Volume(s) | "" |
global.storageClass | DEPRECATED: use global.defaultStorageClass instead | "" |
global.security.allowInsecureImages | Allows skipping image verification | false |
global.compatibility.openshift.adaptSecurityContext | Adapt the securityContext sections of the deployment to make them compatible with Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation) | auto |
Common parameters
Name | Description | Value |
---|---|---|
kubeVersion | Override Kubernetes version | "" |
nameOverride | String to partially override common.names.fullname | "" |
fullnameOverride | String to fully override common.names.fullname | "" |
commonLabels | Labels to add to all deployed objects | {} |
commonAnnotations | Annotations to add to all deployed objects | {} |
extraDeploy | Array of extra objects to deploy with the release | [] |
enableIPv6 | Enable IPv6 configuration | false |
diagnosticMode.enabled | Enable diagnostic mode (all probes will be disabled and the command will be overridden) | false |
diagnosticMode.command | Command to override all containers in the deployment | ["sleep"] |
diagnosticMode.args | Args to override all containers in the deployment | ["infinity"] |
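For example, diagnostic mode can be toggled at upgrade time to troubleshoot a failing deployment (a sketch, same release and namespace assumptions as above):
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps --namespace kubeapps --set diagnosticMode.enabled=true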
Ingress parameters
Name | Description | Value |
---|---|---|
ingress.enabled | Enable ingress record generation for Kubeapps | false |
ingress.apiVersion | Force Ingress API version (automatically detected if not set) | "" |
ingress.hostname | Default host for the ingress record | kubeapps.local |
ingress.path | Default path for the ingress record | / |
ingress.pathType | Ingress path type | ImplementationSpecific |
ingress.annotations | Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. | {} |
ingress.tls | Enable TLS configuration for the host defined at ingress.hostname parameter | false |
ingress.selfSigned | Create a TLS secret for this ingress record using self-signed certificates generated by Helm | false |
ingress.extraHosts | An array with additional hostname(s) to be covered with the ingress record | [] |
ingress.extraPaths | An array with additional arbitrary paths that may need to be added to the ingress under the main host | [] |
ingress.extraTls | TLS configuration for additional hostname(s) to be covered with this ingress record | [] |
ingress.secrets | Custom TLS certificates as secrets | [] |
ingress.ingressClassName | IngressClass that will be be used to implement the Ingress (Kubernetes 1.18+) | "" |
ingress.extraRules | Additional rules to be covered with this ingress record | [] |
Kubeapps packaging options
Name | Description | Value |
---|---|---|
packaging.helm.enabled | Enable the standard Helm packaging. | true |
packaging.carvel.enabled | Enable support for the Carvel (kapp-controller) packaging. | false |
packaging.flux.enabled | Enable support for Flux (v2) packaging. | false |
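For instance, Flux or Carvel support can be enabled at deployment time (a sketch, same release and namespace assumptions as above; the corresponding controller, Flux v2 or kapp-controller, is expected to be installed in the cluster separately):
helm upgrade my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps --namespace kubeapps --set packaging.flux.enabled=true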
Frontend parameters
Name | Description | Value |
---|---|---|
frontend.image.registry | NGINX image registry | REGISTRY_NAME |
frontend.image.repository | NGINX image repository | REPOSITORY_NAME/nginx |
frontend.image.digest | NGINX image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag | "" |
frontend.image.pullPolicy | NGINX image pull policy | IfNotPresent |
frontend.image.pullSecrets | NGINX image pull secrets | [] |
frontend.image.debug | Enable image debug mode | false |
frontend.proxypassAccessTokenAsBearer | Use access_token as the Bearer when talking to the k8s api server | false |
frontend.proxypassExtraSetHeader | Set an additional proxy header for all requests proxied via NGINX | "" |
frontend.largeClientHeaderBuffers | Set large_client_header_buffers in NGINX config | 4 32k |
frontend.replicaCount | Number of frontend replicas to deploy | 2 |
frontend.updateStrategy.type | Frontend deployment strategy type. | RollingUpdate |
frontend.resourcesPreset | Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if frontend.resources is set (frontend.resources is recommended for production). | micro |
frontend.resources | Set container requests and limits for different resources like CPU or memory (essential for production workloads) | {} |
frontend.extraEnvVars | Array with extra environment variables to add to the NGINX container | [] |
frontend.extraEnvVarsCM | Name of existing ConfigMap containing extra env vars for the NGINX container | "" |
frontend.extraEnvVarsSecret | Name of existing Secret containing extra env vars for the NGINX container |
Note: the README for this chart is longer than the DockerHub length limit of 25000, so it has been trimmed. The full README can be found at https://github.com/bitnami/charts/blob/main/bitnami/kubeapps/README.md