Create a service and make sure the name of the service is the name of the application. This is because I use /registry/services/spec/default/<name of service>, and the name of the service becomes the frontend written into haproxy.cfg. Each service needs a selector, something like app: myapp. All the applications that you deploy using a Replication Controller must also carry the matching label (app: myapp). This is because Kubernetes will automatically assign the containers to be load balanced by the service whose selector matches. If I spin up containers or increase the replicas, as long as the labels match my service's selector, Kubernetes will connect the dots. All we care about at this point is the service; the containers behind it can go up and down, and we don't care, since Kubernetes will manage that part.
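As a sketch, the matching pieces look like this (the names my-service, myapp, and the image are hypothetical; the key point is that the Service selector equals the pod labels):

```yaml
# Sketch only: the Service selector must match the pod template labels.
apiVersion: v1
kind: Service
metadata:
  name: my-service         # this name becomes the haproxy frontend
spec:
  selector:
    app: myapp             # must match the pod labels below
  ports:
  - port: 80
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: my-service-rc
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: myapp         # matches the Service selector above
    spec:
      containers:
      - name: myapp
        image: myorg/myapp # hypothetical image
        ports:
        - containerPort: 80
```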
This haproxy container will listen on port 80 of a node running kube-proxy. The kube-proxy service must be running so that the host machine can resolve the service's IP. kube-proxy rewrites the iptables rules, making node-to-pod and pod-to-pod communication possible.
The container exposes ports 80 and 1936 (the latter for the haproxy statistics page). But these are internal IPs and won't be routable externally, so you must start haproxy using the docker -p flag to forward 80 and 1936 to ports on the host machine. This way they will be accessible externally.
docker run -p 80:80 -p 1936:1936 redventures/kubernetes-confd-etcd-haproxy
The frontend is the name of the service, as mentioned above. If my service is called "my-service", the frontend will be my-service. The backend points to my-service's ClusterIP.
Example frontend using my-service:
acl host_my-service hdr(host) -i my-service
use_backend my-service if host_my-service ##### HTTPSMode == disabled, forward to backend
To access my container, I would edit the hosts file to say
<IP of host with HAproxy> my-service
Now when I curl my-service, it hits port 80 of the host running haproxy.
Example of the backend
server kubernetes 10.10.10.10:80 check inter 2s
Now the backend will map to the ClusterIP assigned to the service. By default services load balance requests, so we do not care about the individual IPs given to each container. We could write those in here using /registry/services/endpoints/default, but why not make it easy and let the service handle keeping track of the containers. The check also makes sure the backend is up every 2 seconds. If the service goes down, Kubernetes will spin it back up, and the confd template engine checks every 10 seconds for configuration changes, so if it comes back up on a different IP, it will be caught for us.
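Putting the frontend and backend pieces together, the rendered haproxy.cfg fragment would look roughly like this. This is only a sketch: the frontend name http-in, the bind line, and the example ClusterIP 10.10.10.10 are assumptions, since only the acl, use_backend, and server lines appear above.

```
frontend http-in
    bind *:80
    acl host_my-service hdr(host) -i my-service
    use_backend my-service if host_my-service

backend my-service
    server kubernetes 10.10.10.10:80 check inter 2s
```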
The service I used for this serves on port 80, but the target port is 3000. This is trivial, since Kubernetes already knows the containers are running on 3000, but for the development of this, that is what I used.
All traffic will route to port 80 on the host the HAproxy container is running on, then be directed to the service on port 80. The service has an IP and a TargetPort; on a node server, for example, the TargetPort is 3000. The service will serve the requests on port 80 and redirect them to port 3000 on the hosts. As I said, for development I did use a target port, but you don't need one; the service will know how to route the request.
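The port-to-TargetPort mapping described above corresponds to a spec fragment like this (a sketch; the 80/3000 pair is the example from development):

```yaml
# Sketch: clients hit the service on port 80; the service forwards
# traffic to port 3000 inside the pods (e.g. a node server on 3000).
spec:
  ports:
  - port: 80          # port the service / ClusterIP listens on
    targetPort: 3000  # port the container actually listens on
```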
The services must live in the default namespace; the confd template engine looks there for the services. You can have multiple services, but they must live there.
ETCD cluster: I have hardcoded the cluster into bootstrap-confd-haproxy.sh. I recommend that before starting the container, you change the IP to be your IP.
STILL TO DO
I will fix the ETCD IP issue discussed above and allow the user to start the container and pass in an environment variable containing the etcd cluster IP.
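A minimal sketch of what that change inside bootstrap-confd-haproxy.sh could look like (the variable name ETCD_NODE and the default endpoint are hypothetical, not part of the current script):

```shell
#!/bin/sh
# Hypothetical sketch: fall back to a hardcoded default only when the
# user did not pass -e ETCD_NODE=... to docker run.
ETCD_NODE="${ETCD_NODE:-http://10.0.0.1:4001}"
echo "using etcd at $ETCD_NODE"
```

The container could then be started with something like docker run -e ETCD_NODE=http://<your-ip>:4001 ..., and the confd invocation in the script would read the endpoint from $ETCD_NODE instead of a hardcoded IP.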