Contour is an Ingress controller for Kubernetes that works by deploying the Envoy proxy as a reverse proxy and load balancer. Unlike other Ingress controllers, Contour supports dynamic configuration updates out of the box while maintaining a lightweight profile.
This is an early release so that we can start sharing with the community. Check out the roadmap to see where we plan to go with the project.
And see the launch blog post for our vision of how Contour fits into the larger Kubernetes ecosystem.
Contour is tested with Kubernetes clusters running version 1.7 and later, but should work with earlier versions.
You can try out Contour by creating a deployment from a hosted manifest -- no clone or local install necessary.
What you do need:
- A Kubernetes cluster that supports Service objects of `type: LoadBalancer` (an AWS Quickstart cluster or Minikube, for example)
- `kubectl` configured with admin access to your cluster
See the deployment documentation for more deployment options if you don't meet these requirements.
Add Contour to your cluster
$ kubectl apply -f https://j.hept.io/contour-deployment-rbac
If RBAC isn't enabled on your cluster (for example, if you're on GKE with legacy authorization), run:
$ kubectl apply -f https://j.hept.io/contour-deployment-norbac
This command creates:
- A new namespace `heptio-contour` with two instances of Contour in the namespace
- A Service of `type: LoadBalancer` that points to the Contour instances
- Depending on your configuration, new cloud resources -- for example, ELBs in AWS
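The `type: LoadBalancer` Service that the manifest creates looks roughly like the following sketch. Field values here are illustrative and mirror what the deployed resources report (name `contour`, namespace `heptio-contour`, selector `app=contour`, port 80); the hosted manifest is authoritative.

```yaml
# Sketch of the Service created by the hosted manifest (illustrative only).
apiVersion: v1
kind: Service
metadata:
  name: contour
  namespace: heptio-contour
spec:
  type: LoadBalancer    # asks your cloud provider for an external load balancer
  selector:
    app: contour        # matches the Contour instances in the namespace
  ports:
  - port: 80
    protocol: TCP
```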
See TLS support for details on configuring TLS. TLS support is available in Contour version 0.3 and later.
If you don't have an application ready to run with Contour, you can explore with kuard.
$ kubectl apply -f https://j.hept.io/contour-kuard-example
This example specifies a default backend for all hosts, so that you can test your Contour install. It's recommended for exploration and testing only, however, because it responds to all requests regardless of the hostname they arrive with. In practice, you probably want to run with specific Ingress rules for specific hostnames.
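An Ingress scoped to a specific hostname might look like the sketch below. The hostname `kuard.example.com` is hypothetical, and the backend assumes the `kuard` Service on port 80 from the kuard example manifest; substitute your own names.

```yaml
# Hypothetical Ingress that serves kuard only for one hostname,
# instead of acting as a default backend for all hosts.
apiVersion: extensions/v1beta1   # Ingress API group for Kubernetes 1.7-era clusters
kind: Ingress
metadata:
  name: kuard
spec:
  rules:
  - host: kuard.example.com      # hypothetical hostname; use one you control
    http:
      paths:
      - backend:
          serviceName: kuard     # assumes the Service from the kuard example
          servicePort: 80
```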
Access your cluster
Now you can retrieve the external address of Contour's load balancer:
$ kubectl get -n heptio-contour service contour -o wide
NAME      CLUSTER-IP     EXTERNAL-IP                                                                     PORT(S)        AGE       SELECTOR
contour   10.106.53.14   a47761ccbb9ce11e7b27f023b7e83d33-2036788482.ap-southeast-2.elb.amazonaws.com    80:30274/TCP   3h        app=contour
On Minikube, retrieve the URL instead:
$ minikube service -n heptio-contour contour --url
http://192.168.99.100:30588
How you configure DNS depends on your platform:
- On AWS, create a CNAME record that maps the host in your Ingress object to the ELB address.
- If you have an IP address instead (on GCE, for example), create an A record.
- On Minikube, you can fake DNS by editing `/etc/hosts`, or you can use the provided example so that you don't have to modify DNS on your local machine.
$ kubectl apply -f https://j.hept.io/contour-kuard-minikube-example
This example YAML specifies `kuard.192.168.99.100.nip.io` as a specific Ingress backend for kuard. It uses nip.io and the Minikube IP address so that kuard responds only to http://kuard.192.168.99.100.nip.io. Once the example is applied, you can visit http://kuard.192.168.99.100.nip.io to see the kuard example application.
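If you'd rather fake DNS in `/etc/hosts` than rely on nip.io, a minimal sketch of the entry looks like this. The IP `192.168.99.100` stands in for the output of `minikube ip` on your machine, and `kuard.example.com` is a hypothetical hostname matching your Ingress rule.

```shell
# Sketch: build the /etc/hosts line for faking DNS on Minikube.
MINIKUBE_IP="192.168.99.100"                  # substitute the output of `minikube ip`
HOSTS_LINE="$MINIKUBE_IP kuard.example.com"   # hypothetical hostname from your Ingress rule
echo "$HOSTS_LINE"
# Append it with root privileges: echo "$HOSTS_LINE" | sudo tee -a /etc/hosts
```

Remember to remove the entry when you're done testing, so stale mappings don't shadow real DNS later.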
More information and documentation
For more deployment options, including uninstalling Contour, see the deployment documentation.
The detailed documentation provides additional information, including an introduction to Envoy and an explanation of how Contour maps key Envoy concepts to Kubernetes.
We've also got an FAQ for short-answer questions and conceptual stuff that doesn't quite belong in the docs.
Thanks for taking the time to join our community and start contributing!
- Please familiarize yourself with the Code of Conduct before contributing.
- See CONTRIBUTING.md for information about setting up your environment, the workflow that we expect, and instructions on the developer certificate of origin that we require.
- Check out the issues and our roadmap.
See the list of releases to find out about feature changes.