Short Description
Kubernetes - "Client" for DNS example
Full Description


Opinionated Terraform module for creating a Highly Available Kubernetes cluster running on
CoreOS (any channel) in an AWS VPC. With the prerequisites installed, make all will simply
spin up a default cluster; and since it is based on Terraform, customization is much easier
than with CloudFormation.

The default configuration includes Kubernetes addons: DNS, Dashboard and UI.


# prereqs
$ brew update && brew install awscli cfssl jq kubernetes-cli terraform

# build artifacts and deploy cluster
$ make all

# nodes
$ kubectl get nodes

# addons
$ kubectl get pods --namespace=kube-system

# verify dns - run after addons have fully loaded
$ kubectl exec busybox -- nslookup kubernetes

# open dashboard
$ make dashboard

# obliterate the cluster and all artifacts
$ make clean


Features

  • CoreOS (899.17.0)
  • Kubernetes (v1.2.4)
  • Terraform (v0.6.16)
  • TLS certificate generation
  • etcd DNS Discovery Bootstrap
  • CoreOS AMI sourcing
  • Terraform Pattern Modules

Quick install prerequisites on Mac OS X with Homebrew:

$ brew update && brew install awscli cfssl jq kubernetes-cli terraform

Tested with prerequisite versions:

$ aws --version
aws-cli/1.10.26 Python/2.7.10 Darwin/15.4.0 botocore/1.4.15

$ cfssl version
Version: 1.2.0
Revision: dev
Runtime: go1.6

$ jq --version

$ kubectl version --client
Client Version: version.Info{Major:"1", Minor:"2", GitVersion:"v1.2.4+3eed1e3", GitCommit:"3eed1e3be6848b877ff80a93da3785d9034d0a4f", GitTreeState:"not a git tree"}

$ terraform --version
Terraform v0.6.16

Launch Cluster

make all will create:

  • AWS Key Pair (PEM file)
  • client and server TLS assets
  • S3 bucket for TLS assets (secured by IAM roles for master and worker nodes)
  • CloudWatch log group for Docker logs
  • AWS VPC with private and public subnets
  • Route 53 internal zone for the VPC
  • etcd cluster bootstrapped from Route 53
  • High Availability Kubernetes configuration (masters running on etcd nodes)
  • Autoscaling worker node group across subnets in selected region
  • kube-system namespace and addons: DNS, UI, Dashboard

$ make all

To open dashboard:

$ make dashboard

To destroy, remove and generally undo everything:

$ make clean

make all and make clean should be idempotent - should an error occur, simply run the
command again and things should recover from that point.

How Tack works

Tack Phases

Tack works in three phases:

  1. Pre-Terraform
  2. Terraform
  3. Post-Terraform


Phase 1 - Pre-Terraform

The purpose of this phase is to prep the environment for Terraform execution. Some tasks are
hard or messy to do in Terraform, and a little prep work can go a long way here. Determining
the CoreOS AMI for a given region, channel and VM type, for instance, is easy enough to do
with a simple shell script.
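A sketch of such a lookup, assuming a per-channel manifest shaped like CoreOS's published
AMI lists (region mapped to "pv"/"hvm" image IDs); the file path and AMI IDs below are
placeholders, not real values:

```shell
#!/bin/sh
# Sketch only: write a stand-in for a CoreOS per-channel AMI manifest,
# then extract the HVM AMI for one region. IDs are placeholders.
cat > /tmp/aws-stable.json <<'EOF'
{
  "eu-west-1": { "pv": "ami-33333333", "hvm": "ami-22222222" },
  "us-west-2": { "pv": "ami-11111111", "hvm": "ami-00000000" }
}
EOF

REGION=us-west-2

# Isolate the region's entry, then pull out the "hvm" field.
AMI=$(grep "\"$REGION\"" /tmp/aws-stable.json \
      | sed -n 's/.*"hvm": *"\([^"]*\)".*/\1/p')

echo "$AMI"   # ami-00000000
```

A real script would fetch the manifest for the selected channel (e.g. with curl) rather
than writing a local stand-in, and could use the jq prerequisite instead of sed for the
JSON parsing.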


Phase 2 - Terraform

Terraform does the heavy lifting of resource creation and sequencing. Tack uses local
modules to partition the work in a logical way. Although it is of course possible to do all
of the Terraform work in a single .tf file or collection of .tf files, that quickly becomes
unwieldy and hard to debug. Breaking the work into local modules makes the flow much easier
to follow and provides the basis for composing variable solutions down the track - for
example, converting the worker Auto Scaling Group to use spot instances.
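As an illustration of that module composition (the module names, paths and variables here
are assumptions for the sketch, not tack's actual layout):

```hcl
# Illustrative only - module names, paths and variables are assumptions.
module "vpc" {
  source = "./modules/vpc"
  region = "${var.region}"
}

module "etcd" {
  source    = "./modules/etcd"
  vpc_id    = "${module.vpc.id}"
  subnet_id = "${module.vpc.private_subnet_id}"
}

module "workers" {
  source = "./modules/worker"
  vpc_id = "${module.vpc.id}"
}
```

Each module owns one logical slice of the cluster, so a variant (such as a spot-instance
worker group) can be swapped in without touching the rest of the configuration.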


Phase 3 - Post-Terraform

Once the infrastructure has been configured and instantiated, it will take some time for it
to settle. Waiting for the 'master' ELB to become healthy is an example of this.
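That settle step can be sketched as a small health gate; the function below is an
illustration, and the ELB name in the commented usage is an assumption:

```shell
#!/bin/sh
# all_in_service succeeds only when every argument equals "InService",
# the healthy state reported by `aws elb describe-instance-health`.
all_in_service() {
  [ $# -gt 0 ] || return 1              # nothing registered yet
  for state in "$@"; do
    [ "$state" = "InService" ] || return 1
  done
}

# Usage against a live cluster (needs AWS credentials; ELB name assumed):
#   until all_in_service $(aws elb describe-instance-health \
#           --load-balancer-name kube-master \
#           --query 'InstanceStates[].State' --output text); do
#     sleep 10
#   done

all_in_service InService InService && echo "all instances healthy"
```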


Like many great tools, tack started out as a collection of scripts, makefiles and other
tools. As tack matures and patterns crystallize, it will evolve into a Terraform plugin and
perhaps a Go-based CLI tool for 'init-ing' new cluster configurations. The tooling will
compose Terraform modules into a solution based on user preferences - think npm init or,
better yet, yeoman.


Other Terraform Projects

