Supported tags and respective Dockerfile links
For more information about this image and its history, please see the relevant manifest file (
library/kong). This image is updated via pull requests to the
docker-library/official-images GitHub repo.
For detailed information about the virtual/transfer sizes and individual layers of each of the above supported tags, please see the
repos/kong/tag-details.md file in the
docker-library/repo-info GitHub repo.
What is Kong?
Kong was built to secure, manage, and extend Microservices & APIs. If you're building for web, mobile, or IoT (Internet of Things), you will likely end up needing to implement common functionality on top of your actual software. Kong can help by acting as a gateway for any HTTP resource while providing logging, authentication, and other functionality through plugins.
Powered by NGINX and Cassandra with a focus on high performance and reliability, Kong runs in production at Mashape where it has handled billions of API requests for over ten thousand APIs.
Kong's documentation can be found at getkong.org/docs.
How to use this image
First, Kong requires a running Cassandra 2.2.x or PostgreSQL 9.4/9.5 cluster before it starts. You can either use the official Cassandra/PostgreSQL containers, or use your own.
1. Link Kong to either a Cassandra or PostgreSQL container
Kong supports both Cassandra and PostgreSQL, so it's up to you to decide which datastore you want to use.
Start a Cassandra container by executing:
```console
$ docker run -d --name kong-database \
    -p 9042:9042 \
    cassandra:2.2
```
Start a PostgreSQL container by executing:
```console
$ docker run -d --name kong-database \
    -p 5432:5432 \
    -e "POSTGRES_USER=kong" \
    -e "POSTGRES_DB=kong" \
    postgres:9.4
```
Once the database is running, we can start a Kong container, link it to the database container, and configure the KONG_DATABASE environment variable with either cassandra or postgres, depending on which database you decided to use:
```console
$ docker run -d --name kong \
    --link kong-database:kong-database \
    -e "KONG_DATABASE=cassandra" \
    -e "KONG_CASSANDRA_CONTACT_POINTS=kong-database" \
    -e "KONG_PG_HOST=kong-database" \
    -p 8000:8000 \
    -p 8443:8443 \
    -p 8001:8001 \
    -p 7946:7946 \
    -p 7946:7946/udp \
    kong
```
If everything went well, and if you created your container with the default ports, Kong should be listening on your host's 8000 (proxy), 8443 (proxy SSL), and 8001 (admin API) ports. Port 7946 (cluster) is used only by other Kong nodes.
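To confirm the gateway actually came up, you can poll the admin API from the host. This is a minimal sketch, assuming curl is installed and the admin API was published on port 8001 as in the run command above; `KONG_ADMIN` and `check_kong` are illustrative names, not part of the image:

```shell
# Where the admin API was published (assumption: default port mapping above).
KONG_ADMIN="${KONG_ADMIN:-http://localhost:8001}"

check_kong() {
  # Poll the admin API a few times before giving up, since the container
  # may take a moment to finish booting.
  for _ in 1 2 3 4 5; do
    if curl -fsS "$KONG_ADMIN" >/dev/null 2>&1; then
      echo "Kong admin API is up at $KONG_ADMIN"
      return 0
    fi
    sleep 1
  done
  echo "Kong admin API is not reachable at $KONG_ADMIN" >&2
  return 1
}
```

A `curl $KONG_ADMIN` that succeeds returns Kong's node information as JSON, which is also a quick way to check the configured datastore.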
You can now read the docs at getkong.org/docs to learn more about Kong.
2. Use Kong with a custom configuration (and a custom Cassandra/PostgreSQL cluster)
You can override any property of the Kong configuration file with environment variables: prefix the uppercased name of any Kong configuration property with KONG_, for example:
```console
$ docker run -d --name kong \
    -e "KONG_LOG_LEVEL=info" \
    -e "KONG_CUSTOM_PLUGINS=piwik-log" \
    -e "KONG_PG_HOST=22.214.171.124" \
    -p 8000:8000 \
    -p 8443:8443 \
    -p 8001:8001 \
    -p 7946:7946 \
    -p 7946:7946/udp \
    kong
```
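The mapping from configuration properties to environment variables is mechanical (uppercase the property name and prepend KONG_), so it can be sketched as a tiny helper; `to_kong_env` is a hypothetical name for illustration, not a Kong command:

```shell
# Hypothetical helper: derive the environment variable name for a Kong
# configuration property, e.g. "pg_host" -> "KONG_PG_HOST".
to_kong_env() {
  printf 'KONG_%s\n' "$(printf '%s' "$1" | tr '[:lower:]' '[:upper:]')"
}

to_kong_env "log_level"   # -> KONG_LOG_LEVEL
to_kong_env "pg_host"     # -> KONG_PG_HOST
```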
Reload Kong in a running container
If you change your custom configuration, you can reload Kong (without downtime) by issuing:
```console
$ docker exec -it kong kong reload
```
This will run the
kong reload command in your container.
View license information for the software contained in this image.
Supported Docker versions
This image is officially supported on Docker version 1.12.2.
Support for older versions (down to 1.6) is provided on a best-effort basis.
Please see the Docker installation documentation for details on how to upgrade your Docker daemon.
Documentation for this image is stored in the
kong/ directory of the
docker-library/docs GitHub repo. Be sure to familiarize yourself with the repository's
README.md file before attempting a pull request.
If you have any problems with or questions about this image, please contact us through a GitHub issue. If the issue is related to a CVE, please check for a
cve-tracker issue on the
official-images repository first.
You can also reach many of the official image maintainers via the
#docker-library IRC channel on Freenode.
You are invited to contribute new features, fixes, or updates, large or small; we are always thrilled to receive pull requests, and do our best to process them as fast as we can.
Before you start to code, we recommend discussing your plans through a GitHub issue, especially for more ambitious contributions. This gives other contributors a chance to point you in the right direction, give you feedback on your design, and help you find out if someone else is working on the same thing.