chaoskube periodically kills random pods in your Kubernetes cluster.
Test how your system behaves under arbitrary pod failures.
Running it will kill a pod in any namespace every 10 minutes by default.
```console
$ chaoskube
...
INFO Targeting cluster at https://kube.you.me
INFO Killing pod kube-system/kube-dns-v20-6ikos
INFO Killing pod chaoskube/nginx-701339712-u4fr3
INFO Killing pod kube-system/kube-proxy-gke-earthcoin-pool-3-5ee87f80-n72s
INFO Killing pod chaoskube/nginx-701339712-bfh2y
INFO Killing pod kube-system/heapster-v1.2.0-1107848163-bhtcw
INFO Killing pod kube-system/l7-default-backend-v1.0-o2hc9
INFO Killing pod kube-system/heapster-v1.2.0-1107848163-jlfcd
INFO Killing pod chaoskube/nginx-701339712-bfh2y
INFO Killing pod chaoskube/nginx-701339712-51nt8
...
```
chaoskube lets you filter target pods by namespace, labels, and annotations.
See below for details.
Install chaoskube via go get, make sure your current context points to your target cluster, and use the --deploy flag:
```console
$ go get -u github.com/linki/chaoskube
$ chaoskube --deploy
INFO Dry run enabled. I won't kill anything. Use --no-dry-run when you're ready.
INFO Targeting cluster at https://kube.you.me
INFO Deployed quay.io/linki/chaoskube:v0.5.0
```
By default, chaoskube will be friendly and not kill anything. Once you have validated your target cluster, you can disable dry-run mode. You can also specify a more aggressive interval and other supported flags for your deployment:
```console
$ chaoskube --interval=1m --no-dry-run --debug --deploy
DEBU Using current context from kubeconfig at /Users/you/.kube/config.
INFO Targeting cluster at https://kube.you.me
DEBU Deploying quay.io/linki/chaoskube:v0.5.0
INFO Deployed quay.io/linki/chaoskube:v0.5.0
```
Alternatively, you can install chaoskube with Helm:

```console
$ helm install stable/chaoskube --version 0.5.0 --set interval=1m,dryRun=false
```
Refer to chaoskube on kubeapps.com to learn how to configure it and to find other useful Helm charts.
Otherwise, use the following equivalent manifest file or let it serve as inspiration:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: chaoskube
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: chaoskube
    spec:
      containers:
      - name: chaoskube
        image: quay.io/linki/chaoskube:v0.5.0
        args:
        - --interval=1m
        - --no-dry-run
```
If you're running chaoskube inside a Kubernetes cluster and want to target that same cluster, this is all you need to do.
If you want to target a different cluster, or want to run chaoskube locally, specify the cluster via the --master flag or provide a valid kubeconfig via the --kubeconfig flag. By default, it uses the standard kubeconfig path in your home directory, which means whatever the current context there is will be targeted.
If you want to increase or decrease the amount of chaos, change the interval between killings with the --interval flag. Alternatively, you can increase the number of replicas of your chaoskube deployment.
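To get a feel for how interval and replica count trade off, here is a back-of-the-envelope sketch. It assumes each replica kills exactly one pod per interval and that replicas act independently, which is an approximation:

```python
def kills_per_hour(interval_minutes, replicas=1):
    # Each replica kills one random pod per interval; with several
    # independent replicas the rates simply add up.
    return replicas * 60 / interval_minutes

print(kills_per_hour(10))             # default interval: 6 kills/hour
print(kills_per_hour(1))              # --interval=1m: 60 kills/hour
print(kills_per_hour(1, replicas=3))  # three replicas: 180 kills/hour
```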
Keep in mind that chaoskube by default kills any pod in all your namespaces, including system pods and chaoskube itself. However, you can limit its search space by providing label, annotation, and namespace selectors.
```console
$ chaoskube --labels 'app=mate,chaos,stage!=production'
...
INFO Filtering pods by labels: app=mate,chaos,stage!=production
```
This selects all pods that have the label app set to mate, the label chaos set to anything, and the label stage not set to production or unset.
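The selector semantics can be sketched in a few lines of Python. This is only an illustration of how Kubernetes label selectors behave, not chaoskube's actual implementation, and the matches helper is a made-up name:

```python
def matches(labels, selector):
    """Check a pod's labels against a comma-separated selector.

    Supports the three forms used above: key=value (equality),
    a bare key (existence), and key!=value (inequality).
    """
    for requirement in selector.split(","):
        if "!=" in requirement:
            key, value = requirement.split("!=")
            if labels.get(key) == value:  # set to the forbidden value
                return False
        elif "=" in requirement:
            key, value = requirement.split("=")
            if labels.get(key) != value:
                return False
        elif requirement not in labels:   # bare key: label must exist
            return False
    return True

selector = "app=mate,chaos,stage!=production"
print(matches({"app": "mate", "chaos": "yes"}, selector))    # True
print(matches({"app": "mate", "chaos": "yes",
               "stage": "production"}, selector))            # False
print(matches({"app": "mate"}, selector))                    # False: chaos missing
```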
You can filter target pods by namespace selector as well.
```console
$ chaoskube --namespaces 'default,testing,staging'
...
INFO Filtering pods by namespaces: default,staging,testing
```
This will filter for pods in the three namespaces default, staging, and testing.
You can also exclude namespaces and mix and match with the label and annotation selectors.
```console
$ chaoskube \
    --labels 'app=mate,chaos,stage!=production' \
    --annotations '!scheduler.alpha.kubernetes.io/critical-pod' \
    --namespaces '!kube-system,!production'
...
INFO Filtering pods by labels: app=mate,chaos,stage!=production
INFO Filtering pods by annotations: !scheduler.alpha.kubernetes.io/critical-pod
INFO Filtering pods by namespaces: !kube-system,!production
```
This further limits the search space of the above label selector by also excluding any pods in the kube-system and production namespaces, as well as ignoring all pods that are annotated as critical.
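The namespace selector's include/exclude behavior can also be sketched briefly. Again, this is an illustration of the behavior described above, not chaoskube's actual code, and the namespace_allowed helper is a made-up name:

```python
def namespace_allowed(namespace, selector):
    """Check a namespace against a selector like '!kube-system,!production'
    or 'default,testing,staging'. Entries prefixed with '!' exclude that
    namespace; plain entries form an allow-list."""
    includes, excludes = [], []
    for entry in selector.split(","):
        if entry.startswith("!"):
            excludes.append(entry[1:])
        else:
            includes.append(entry)
    if namespace in excludes:
        return False
    # With no plain entries, every non-excluded namespace is allowed.
    return not includes or namespace in includes

print(namespace_allowed("default", "!kube-system,!production"))      # True
print(namespace_allowed("kube-system", "!kube-system,!production"))  # False
print(namespace_allowed("staging", "default,testing,staging"))       # True
```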
The annotation selector can also be used to run chaoskube as a cluster addon and let pods opt in to being terminated as you see fit. For example, you could run chaoskube like this:
```console
$ chaoskube --annotations 'chaos.alpha.kubernetes.io/enabled=true'
...
INFO Filtering pods by annotations: chaos.alpha.kubernetes.io/enabled=true
INFO No victim could be found. If that's surprising double-check your selectors.
```
Unless you already use that annotation somewhere, this will initially ignore all of your pods. You can then selectively opt individual deployments into chaos mode by annotating their pods:
```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  template:
    metadata:
      annotations:
        chaos.alpha.kubernetes.io/enabled: "true"
    spec:
      ...
```
Feel free to create issues or submit pull requests.