manifest-controller watches remote Kubernetes manifests and applies them to your cluster.
Think of the kubelet, but for your whole cluster. The kubelet runs on a single node, watches a source of Kubernetes manifests, e.g. a path on the filesystem or a remote HTTP endpoint, and tells the container runtime on that node to run the respective containers. manifest-controller runs in a single cluster, watches a source of Kubernetes manifests, and tells that cluster's Kubernetes API server to run the respective manifests. Currently, any OAuth2-protected HTTP endpoint is supported as a source.
$ manifest-controller \
    --cluster=http://127.0.0.1:8001 \
    --source=https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/es-svc.yaml \
    --source=https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/es-rc.yaml \
    --source=https://raw.githubusercontent.com/kubernetes/kubernetes/master/examples/elasticsearch/service-account.yaml \
    [--source=another-source ...] \
    --token=a-personal-github-access-token
This watches the three example manifest files in the github.com/kubernetes repository. If any changes are pushed to those files on the master branch, manifest-controller will apply them.
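Per source, the watch step boils down to an HTTP GET with the token in the Authorization header. A minimal sketch of that fetch in Python (the controller itself is not necessarily written in Python; the url and token arguments are placeholders):

```python
import urllib.request


def fetch_manifest(url: str, token: str) -> str:
    """Fetch one watched manifest from a bearer-token-protected source.

    A sketch of the per---source fetch step only; url and token are
    placeholders, and applying the result is left to kubectl.
    """
    req = urllib.request.Request(
        url, headers={"Authorization": f"Bearer {token}"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")
```

Conceptually the controller does the equivalent for every --source flag and hands the result to kubectl apply whenever the content changes.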
Note that any HTTP server that serves YAML files and authenticates clients via the Authorization: Bearer <token> header will work. So instead of an HTTP server pointing at your GitHub repository content, as seen above, you could easily run your own more tailored solution, e.g. https://manifests.me/?cluster=foo&env=bar could return YAML specifically for that cluster.
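Such a tailored source only needs to check the bearer token and return YAML. A minimal sketch in Python's standard library (the token, port, and returned ConfigMap are made up for illustration; manifests.me is a hypothetical service, not a real one):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import parse_qs, urlparse

EXPECTED_TOKEN = "s3cret"  # illustrative; load from real credential storage


class ManifestHandler(BaseHTTPRequestHandler):
    """Serves cluster-specific YAML to clients with the right bearer token."""

    def do_GET(self):
        # Reject requests without the expected Authorization header.
        if self.headers.get("Authorization") != f"Bearer {EXPECTED_TOKEN}":
            self.send_response(401)
            self.end_headers()
            return
        # e.g. /?cluster=foo&env=bar -> a manifest tailored to cluster "foo"
        params = parse_qs(urlparse(self.path).query)
        cluster = params.get("cluster", ["default"])[0]
        body = (
            "apiVersion: v1\n"
            "kind: ConfigMap\n"
            "metadata:\n"
            f"  name: settings-{cluster}\n"
        ).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "application/yaml")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
```

Run it with HTTPServer(("0.0.0.0", 8080), ManifestHandler).serve_forever() and point a --source flag at http://<host>:8080/?cluster=foo&env=bar.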
The controller will only kubectl apply the files it watches, so it currently cannot handle deletions. Furthermore, API objects that don't handle apply correctly will not work as expected either, e.g. daemon sets can't be applied at the moment.