bitnamicharts/kubeapps

Bitnami package for Kubeapps

Kubeapps is a web-based UI for launching and managing applications on Kubernetes. It allows users to deploy trusted applications and operators, and to control users' access to the cluster.

Overview of Kubeapps

TL;DR

helm install my-release oci://registry-1.docker.io/bitnamicharts/kubeapps --namespace kubeapps --create-namespace

Note: This command uses the Bitnami registry (registry-1.docker.io) and repository (bitnamicharts). If you use a different Helm chart registry or repository, adjust the OCI reference accordingly. Check out the getting started guide to start deploying apps with Kubeapps.

Looking to use Kubeapps in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.

Introduction

This chart bootstraps a Kubeapps deployment on a Kubernetes cluster using the Helm package manager.

With Kubeapps you can:

  • Customize deployments through an intuitive, form-based user interface
  • Inspect, upgrade and delete applications installed in the cluster
  • Browse and deploy Helm charts from public or private chart repositories
  • Deploy and manage Operators directly from the dashboard

Note: Kubeapps 2.0 and onwards supports Helm 3 only. While only the Helm 3 API is supported, in most cases, charts made for Helm 2 will still work.

It also packages the Bitnami PostgreSQL chart, which is required to satisfy the database requirements of the Kubeapps application.

Prerequisites

  • Kubernetes 1.23+
  • Helm 3.8.0+
  • Administrative access to the cluster to create Custom Resource Definitions (CRDs)
  • PV provisioner support in the underlying infrastructure (required for PostgreSQL database)

Installing the Chart

To install the chart with the release name my-release:

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps --namespace kubeapps --create-namespace

Note: You need to substitute the placeholders REGISTRY_NAME and REPOSITORY_NAME with a reference to your Helm chart registry and repository. For example, in the case of Bitnami, you need to use REGISTRY_NAME=registry-1.docker.io and REPOSITORY_NAME=bitnamicharts.

The command deploys Kubeapps on the Kubernetes cluster in the kubeapps namespace. The Parameters section lists the parameters that can be configured during installation.

Caveat: Only one Kubeapps installation is supported per namespace.

Once you have installed Kubeapps, follow the Getting Started Guide for additional information on how to access and use Kubeapps.

Configuration and installation details

Resource requests and limits

Bitnami charts allow setting resource requests and limits for all containers inside the chart deployment. These are configured via the resources value (check the parameters table). Setting requests is essential for production workloads, and they should be adapted to your specific use case.

To make this process easier, the chart provides the resourcesPreset value, which automatically sets the resources section according to different presets. Check these presets in the bitnami/common chart. However, using resourcesPreset for production workloads is discouraged, as it may not fully adapt to your specific needs. Find more information on container resource management in the official Kubernetes documentation.
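For reference, a minimal sketch of both approaches using --set flags at install time; frontend.resourcesPreset and frontend.resources are listed in the Parameters section below, while the CPU and memory figures are illustrative placeholders rather than recommendations:

# Quick test deployment using a preset (discouraged for production)
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps --create-namespace \
  --set frontend.resourcesPreset=micro

# Explicit requests and limits (recommended for production)
helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps --create-namespace \
  --set frontend.resources.requests.cpu=250m \
  --set frontend.resources.requests.memory=256Mi \
  --set frontend.resources.limits.cpu=500m \
  --set frontend.resources.limits.memory=512Mi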

Backup and restore

To back up and restore Helm chart deployments on Kubernetes, you need to back up the persistent volumes from the source deployment and attach them to a new deployment using Velero, a Kubernetes backup/restore tool. Find the instructions for using Velero in this guide.
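As an illustration only, assuming the Velero CLI and a suitable storage/volume-snapshot plugin are already installed in the cluster (see the guide above for the full procedure), a backup and restore of the kubeapps namespace could look like this; the backup name is a placeholder:

# Back up all resources (including persistent volumes) in the kubeapps namespace
velero backup create kubeapps-backup --include-namespaces kubeapps

# Later, restore from that backup into a new or recovered cluster
velero restore create --from-backup kubeapps-backup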

Configuring Initial Repositories

By default, Kubeapps will track the Bitnami Application Catalog. To change this default, override the apprepository.initialRepos object present in the values.yaml file with your desired repositories.
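As a hedged example, assuming the initialRepos entries use the name and url fields found in the chart's default values.yaml, and using a hypothetical repository URL, the default catalog could be replaced with your own repository via a custom values file:

# custom-repos.yaml (the repository name and URL are placeholders)
cat > custom-repos.yaml <<EOF
apprepository:
  initialRepos:
    - name: my-charts
      url: https://charts.example.com
EOF

helm install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps --create-namespace \
  -f custom-repos.yaml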

Enabling Operators

Since v1.9.0 (and by default since v2.0), Kubeapps supports deploying and managing Operators within its dashboard. More information about how to enable and use this feature can be found in this guide.

Exposing Externally

Note: The Kubeapps frontend sets up a proxy to the Kubernetes API service, which means that when the Kubeapps service is exposed to a network external to the Kubernetes cluster (perhaps on an internal or public network), the Kubernetes API will also be exposed for authenticated requests from that network. It is highly recommended that you use an OAuth2/OIDC provider with Kubeapps so that your authentication proxy, rather than the Kubeapps frontend, is what gets exposed. This ensures that only the users trusted by your Identity Provider will be able to reach the Kubeapps frontend and therefore the Kubernetes API. Kubernetes service token authentication should only be used for demonstration purposes, not in production environments.

LoadBalancer Service

The simplest way to expose the Kubeapps Dashboard is to assign a LoadBalancer type to the Kubeapps frontend Service. For example, you can use the following parameter: frontend.service.type=LoadBalancer
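For example, assuming your cluster can provision LoadBalancer Services (typical on managed cloud providers), the parameter mentioned above can be applied at install or upgrade time:

helm upgrade --install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps --create-namespace \
  --set frontend.service.type=LoadBalancer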

Wait for your cluster to assign a LoadBalancer IP or Hostname to the kubeapps Service and access it on that address:

kubectl get services --namespace kubeapps --watch

Ingress

This chart provides support for Ingress resources. If you have an ingress controller installed on your cluster, such as nginx-ingress-controller or contour, you can utilize the ingress controller to serve your application.

To enable ingress integration, set ingress.enabled to true.
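For instance, a minimal sketch that enables the ingress record with a custom hostname; kubeapps.example.com is a placeholder, and the nginx IngressClass name assumes an NGINX ingress controller is installed in the cluster:

helm upgrade --install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps --create-namespace \
  --set ingress.enabled=true \
  --set ingress.hostname=kubeapps.example.com \
  --set ingress.ingressClassName=nginx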

Hosts

Most likely you will only want one hostname that maps to this Kubeapps installation (use the ingress.hostname parameter to set the hostname); however, it is possible to have more than one host. To facilitate this, the ingress.extraHosts object is an array.
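As a sketch, and assuming each ingress.extraHosts entry accepts name and path fields (check the chart's values.yaml for the exact structure), an additional hostname could be added like this; both hostnames are placeholders:

helm upgrade --install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps --create-namespace \
  --set ingress.enabled=true \
  --set ingress.hostname=kubeapps.example.com \
  --set "ingress.extraHosts[0].name=kubeapps-alt.example.com" \
  --set "ingress.extraHosts[0].path=/"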

Annotations

For annotations, please see this document. Not all annotations are supported by all ingress controllers, but this document does a good job of indicating which annotations are supported by many popular ingress controllers. Annotations can be set using ingress.annotations.

TLS

This chart will facilitate the creation of TLS secrets for use with the ingress controller; however, this is not required. There are four common use cases:

  • Helm generates/manages certificate secrets based on the parameters.
  • The user generates/manages certificates separately.
  • Helm creates self-signed certificates and generates/manages certificate secrets.
  • An additional tool (like cert-manager) manages the secrets for the application.

In the first two cases, a certificate and a key are needed. We would expect them to look like this:

  • certificate files should look like this (and there can be more than one certificate if there is a certificate chain):

    -----BEGIN CERTIFICATE-----
    MIID6TCCAtGgAwIBAgIJAIaCwivkeB5EMA0GCSqGSIb3DQEBCwUAMFYxCzAJBgNV
    ...
    jScrvkiBO65F46KioCL9h5tDvomdU1aqpI/CBzhvZn1c0ZTf87tGQR8NK7v7
    -----END CERTIFICATE-----
    
  • keys should look like:

    -----BEGIN RSA PRIVATE KEY-----
    MIIEogIBAAKCAQEAvLYcyu8f3skuRyUgeeNpeDvYBCDcgq+LsWap6zbX5f8oLqp4
    ...
    wrj2wDbCDCFmfqnSJ+dKI3vFLlEz44sAV8jX/kd4Y6ZTQhlLbYc=
    -----END RSA PRIVATE KEY-----
    
  • If you are going to use Helm to manage the certificates based on the parameters, please copy these values into the certificate and key values for a given ingress.secrets entry.

  • In case you are going to manage TLS secrets separately, please know that you must use a TLS secret with name INGRESS_HOSTNAME-tls (where INGRESS_HOSTNAME is a placeholder to be replaced with the hostname you set using the ingress.hostname parameter).

  • To use self-signed certificates created by Helm, set both ingress.tls and ingress.selfSigned to true (see the example after this list).

  • If your cluster has a cert-manager add-on to automate the management and issuance of TLS certificates, set ingress.certManager boolean to true to enable the corresponding annotations for cert-manager.
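As a sketch combining the cases above, assuming kubeapps.example.com is the hostname set via ingress.hostname and that tls.crt and tls.key are your own certificate and key files:

# Case: self-signed certificates created and managed by Helm
helm upgrade --install my-release oci://REGISTRY_NAME/REPOSITORY_NAME/kubeapps \
  --namespace kubeapps --create-namespace \
  --set ingress.enabled=true \
  --set ingress.hostname=kubeapps.example.com \
  --set ingress.tls=true \
  --set ingress.selfSigned=true

# Case: TLS secret managed separately, following the INGRESS_HOSTNAME-tls naming convention
kubectl create secret tls kubeapps.example.com-tls \
  --cert=tls.crt --key=tls.key \
  --namespace kubeapps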

Parameters

Global parameters

  • global.imageRegistry: Global Docker image registry. Default: ""
  • global.imagePullSecrets: Global Docker registry secret names as an array. Default: []
  • global.defaultStorageClass: Global default StorageClass for Persistent Volume(s). Default: ""
  • global.storageClass: DEPRECATED: use global.defaultStorageClass instead. Default: ""
  • global.security.allowInsecureImages: Allows skipping image verification. Default: false
  • global.compatibility.openshift.adaptSecurityContext: Adapt the securityContext sections of the deployment to make them compatible with the Openshift restricted-v2 SCC: remove runAsUser, runAsGroup and fsGroup and let the platform use their allowed default IDs. Possible values: auto (apply if the detected running cluster is Openshift), force (perform the adaptation always), disabled (do not perform adaptation). Default: auto
Common parameters

  • kubeVersion: Override Kubernetes version. Default: ""
  • nameOverride: String to partially override common.names.fullname. Default: ""
  • fullnameOverride: String to fully override common.names.fullname. Default: ""
  • commonLabels: Labels to add to all deployed objects. Default: {}
  • commonAnnotations: Annotations to add to all deployed objects. Default: {}
  • extraDeploy: Array of extra objects to deploy with the release. Default: []
  • enableIPv6: Enable IPv6 configuration. Default: false
  • diagnosticMode.enabled: Enable diagnostic mode (all probes will be disabled and the command will be overridden). Default: false
  • diagnosticMode.command: Command to override all containers in the deployment. Default: ["sleep"]
  • diagnosticMode.args: Args to override all containers in the deployment. Default: ["infinity"]
Traffic Exposure Parameters

  • ingress.enabled: Enable ingress record generation for Kubeapps. Default: false
  • ingress.apiVersion: Force Ingress API version (automatically detected if not set). Default: ""
  • ingress.hostname: Default host for the ingress record. Default: kubeapps.local
  • ingress.path: Default path for the ingress record. Default: /
  • ingress.pathType: Ingress path type. Default: ImplementationSpecific
  • ingress.annotations: Additional annotations for the Ingress resource. To enable certificate autogeneration, place here your cert-manager annotations. Default: {}
  • ingress.tls: Enable TLS configuration for the host defined at the ingress.hostname parameter. Default: false
  • ingress.selfSigned: Create a TLS secret for this ingress record using self-signed certificates generated by Helm. Default: false
  • ingress.extraHosts: An array with additional hostname(s) to be covered with the ingress record. Default: []
  • ingress.extraPaths: An array with additional arbitrary paths that may need to be added to the ingress under the main host. Default: []
  • ingress.extraTls: TLS configuration for additional hostname(s) to be covered with this ingress record. Default: []
  • ingress.secrets: Custom TLS certificates as secrets. Default: []
  • ingress.ingressClassName: IngressClass that will be used to implement the Ingress (Kubernetes 1.18+). Default: ""
  • ingress.extraRules: Additional rules to be covered with this ingress record. Default: []
Kubeapps packaging options

  • packaging.helm.enabled: Enable the standard Helm packaging. Default: true
  • packaging.carvel.enabled: Enable support for the Carvel (kapp-controller) packaging. Default: false
  • packaging.flux.enabled: Enable support for Flux (v2) packaging. Default: false
Frontend parameters

  • frontend.image.registry: NGINX image registry. Default: REGISTRY_NAME
  • frontend.image.repository: NGINX image repository. Default: REPOSITORY_NAME/nginx
  • frontend.image.digest: NGINX image digest in the way sha256:aa.... Please note this parameter, if set, will override the tag. Default: ""
  • frontend.image.pullPolicy: NGINX image pull policy. Default: IfNotPresent
  • frontend.image.pullSecrets: NGINX image pull secrets. Default: []
  • frontend.image.debug: Enable image debug mode. Default: false
  • frontend.proxypassAccessTokenAsBearer: Use access_token as the Bearer when talking to the k8s api server. Default: false
  • frontend.proxypassExtraSetHeader: Set an additional proxy header for all requests proxied via NGINX. Default: ""
  • frontend.largeClientHeaderBuffers: Set large_client_header_buffers in NGINX config. Default: 4 32k
  • frontend.replicaCount: Number of frontend replicas to deploy. Default: 2
  • frontend.updateStrategy.type: Frontend deployment strategy type. Default: RollingUpdate
  • frontend.resourcesPreset: Set container resources according to one common preset (allowed values: none, nano, micro, small, medium, large, xlarge, 2xlarge). This is ignored if frontend.resources is set (frontend.resources is recommended for production). Default: micro
  • frontend.resources: Set container requests and limits for different resources like CPU or memory (essential for production workloads). Default: {}
  • frontend.extraEnvVars: Array with extra environment variables to add to the NGINX container. Default: []
  • frontend.extraEnvVarsCM: Name of existing ConfigMap containing extra env vars for the NGINX container. Default: ""
  • frontend.extraEnvVarsSecret: Name of existing Secret containing extra env vars for the NGINX container

Note: the README for this chart is longer than the DockerHub length limit of 25,000 characters, so it has been trimmed. The full README can be found at https://github.com/bitnami/charts/blob/main/bitnami/kubeapps/README.md
