This is a fully dynamic Docker link ambassador.
For information on link ambassadors see:
The problem with linking is that links are static. When a container that is
being linked to is restarted, it very likely comes back with a new IP address.
Any container linked to the restarted container must then also be restarted
in order to pick up the new IP address. Linked containers can therefore have
a cascading effect: many containers end up needing a restart just to update a
single IP address.
Ambassadors are seen as a way to mitigate this, but as typically used they are
only marginally useful in a multi-host setup and much less useful in a
single-host setup.
The solution will very likely be added to Docker at some point, but until that
time, we need something a bit more dynamic.
Grand Ambassador reads all the exposed ports of the passed in container and
creates a proxy for each of those ports on all interfaces in the ambassador.
Once the ambassador is started it will begin to monitor the Docker event stream
for potential changes to these settings and adjust the proxy settings
accordingly, without restarting the ambassador container.
docker run -d -v /var/run/docker.sock:/docker.sock \
    cpuguy83/docker-grand-ambassador \
    -name container_name \
    -sock /docker.sock
Usage of /usr/bin/grand-ambassador:
  -log-level="info": Set debug logging
  -name=: Name/ID of container to ambassadorize
  -sock="/var/run/docker.sock": Path to docker socket
  -tls=false: Enable TLS for connecting to Docker socket
  -tlscacert="/root/.docker/ca.pem": Path to TLS ca cert
  -tlscert="/root/.docker/cert.pem": Path to TLS cert
  -tlskey="/root/.docker/key.pem": Path to TLS key
  -tlsverify=false: Enable TLS verification of the Docker host
  -wait=true: Wait for container to be created if it doesn't exist on start
docker run -d --expose 6379 --name redis redis
docker run -d -v /var/run/docker.sock:/var/run/docker.sock \
    --name redis_ambassador \
    cpuguy83/docker-grand-ambassador -name redis
docker run --rm --link redis_ambassador:db crosbymichael/redis-cli -h db ping
It's a proxy!
Hello, it looks like a promising idea, but further documentation and example use cases are needed, even for single-host functionality. What changes relative to plain linking? Does this watching act automatically on the underlying private Docker IP network? Do we still need to link every single container to it alongside our other links? Could this be used for linking to, and communicating with, a container on another host? What would it take to move a MySQL container to another host without needing to restart everything? How about adding a second web or application server on another host?