Short Description
Image containing Azul Systems' OpenJDK 8 implementation.
Full Description


This project is a simple Docker image that provides access to the
Azul Systems JDK. It is intended
for running JVM applications, not for building and testing them. If you
need to build a JVM application, look at this project instead.



Type ./ to build the image.


Docker will automatically install the newly built image into the cache.

Tips and Tricks

Launching The Image

Use ./ to exercise the image.
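If you just want to verify that the image works, one quick sanity check (a sketch, assuming the kurron/docker-azul-jdk-8:latest tag used in the Dockerfile example below) is to print the JDK version from a throwaway container:

```shell
# Start a throwaway container, print the JDK version, then remove it.
docker run --rm kurron/docker-azul-jdk-8:latest java -version
```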

Example Usage

There are samples showing how to use the image in the examples folder, and we will
highlight some options here as well.

The basic idea is to use a Bash script to launch the JVM so you can apply
the appropriate switches that are useful in a containerized environment. You
should copy that script into your image and make the script the entrypoint
to your image.

The Dockerfile:

FROM kurron/docker-azul-jdk-8:latest


# Copy the launch script and the sample class into the image
ADD /home/microservice/
RUN chmod a+x /home/microservice/
ADD Hello.class /home/microservice/Hello.class

# Switch to the non-root user
USER microservice

# Run the simple program
ENTRYPOINT ["/home/microservice/", "Hello"]

The Bash script for a single core host:



#!/bin/bash

CMD="${JAVA_HOME}/bin/java \
    -server \
    -XX:+UnlockExperimentalVMOptions \
    -XX:+UseCGroupMemoryLimitForHeap \
    -XX:+ScavengeBeforeFullGC \
    -XX:+CMSScavengeBeforeRemark \
    -XX:+UseSerialGC \
    -XX:MinHeapFreeRatio=20 \
    -XX:MaxHeapFreeRatio=40 \
    -XX:GCTimeRatio=4 \
    -XX:AdaptiveSizePolicyWeight=90 \
    ${JVM_DNS_TTL} \
    $*"

echo ${CMD}
exec ${CMD}

The Bash script for a multi-core host:



#!/bin/bash

CMD="${JAVA_HOME}/bin/java \
    -server \
    -XX:+UnlockExperimentalVMOptions \
    -XX:+UseCGroupMemoryLimitForHeap \
    -XX:+ScavengeBeforeFullGC \
    -XX:+CMSScavengeBeforeRemark \
    -XX:ParallelGCThreads=${JVM_GC_THREADS} \
    -XX:+UseConcMarkSweepGC \
    -XX:+CMSParallelRemarkEnabled \
    -XX:+UseCMSInitiatingOccupancyOnly \
    -XX:CMSInitiatingOccupancyFraction=70 \
    ${JVM_DNS_TTL} \
    $*"

echo ${CMD}
exec ${CMD}

Please note that it is very important to use exec to launch the JVM,
or signals, such as the SIGTERM sent by docker stop, will not reach it.
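Why exec matters is easy to demonstrate with a toy sketch: exec replaces the current process instead of forking a child, so the PID does not change, which is why Docker's SIGTERM reaches the JVM directly when the launch script execs it. The demo below uses only plain sh, nothing from this image:

```shell
# Compare the shell's PID before and after exec.  The demo runs in a
# subshell via command substitution so the current shell survives.
# The inner \$\$ is escaped so it is expanded by the exec'd shell.
pids=$(sh -c 'echo $$; exec sh -c "echo \$\$"')
before=$(echo "$pids" | head -n 1)
after=$(echo "$pids" | tail -n 1)
# Both PIDs are identical: exec did not create a new process.
echo "before=${before} after=${after}"
```

Without exec, the shell would linger as the container's PID 1 and the SIGTERM would stop at it, leaving the JVM to be killed only after Docker's timeout.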

You can control how much CPU and RAM the container sees via Docker's
--cpus, --memory and --memory-swap switches.
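For example, a sketch that pins the container to one CPU and 512 MiB of RAM (the values are arbitrary; setting --memory-swap equal to --memory disables swap for the container):

```shell
# Limit the container to 1 CPU and 512 MiB of RAM with no extra swap.
docker run --rm \
       --cpus 1 \
       --memory 512m \
       --memory-swap 512m \
       kurron/docker-azul-jdk-8:latest java -version
```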

Observed JVM Memory Behavior

Using VisualVM I was able to watch the JVM's heap
and make the following observations. Tests were run with
OpenJDK Runtime Environment (Zulu) (build 1.8.0_131-b11).

  1. Docker's --memory switch sets the cgroup memory limit
  2. exec into a container and run mount | grep cgroup | grep memory, then more /sys/fs/cgroup/memory/memory.limit_in_bytes to see the cgroup value
  3. the JVM's -XX:+UseCGroupMemoryLimitForHeap only respects the cgroup settings when explicit heap settings are not provided
  4. setting -Xms and -Xmx can exceed the cgroup setting and what Docker thinks you are using for memory
  5. not specifying heap settings causes the JVM to allocate a much smaller heap, anecdotally about half of the Docker allocation
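Observation 5 can be mimicked by hand: read the cgroup limit from item 2 and hand the JVM roughly half of it. A minimal sketch, assuming the cgroup v1 file layout and an arbitrary 512 MiB fallback for hosts without that file:

```shell
# Derive a -Xmx value from the container's cgroup v1 memory limit,
# mirroring the "about half of the Docker allocation" behavior above.
LIMIT_FILE=/sys/fs/cgroup/memory/memory.limit_in_bytes
if [ -r "${LIMIT_FILE}" ]; then
    LIMIT_BYTES=$(cat "${LIMIT_FILE}")
else
    LIMIT_BYTES=$((512 * 1024 * 1024))   # assumed fallback: 512 MiB
fi
# Give the heap half of the container's memory allotment.
HEAP_MB=$((LIMIT_BYTES / 1024 / 1024 / 2))
echo "-Xmx${HEAP_MB}m"
```

The echoed value is only a suggestion you could feed into the -Xmx switch of the launch scripts above.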

Your situation will dictate what runtime switches to use. A scheduler, such as Kubernetes,
will only understand the cgroup settings so you can either let the JVM figure out the heap
based on what the scheduler assigns it or specify the heap settings explicitly. If you
specify the heap by hand and get it wrong by exceeding the amount of memory the scheduler
thinks you want to use, you could cause an OOM situation with other containers.
Eventually, the JVM will catch up with the container world but until that day, we'll have
to manage memory settings very carefully.


License and Credits

This project is licensed under the
Apache License Version 2.0, January 2004.

List of Changes

  • removed Docker, Docker Compose and Ansible from the image. Use the build image instead.
  • use azul/zulu-openjdk:8u131 as the base image to be more Kubernetes friendly
  • update to OpenJDK 64-Bit Server VM (Zulu) (build 25.131-b11, mixed mode)