Official Repository

Vault is a tool for securely accessing secrets via a unified interface and tight access control.

Vault is a tool for securely accessing secrets. A secret is anything that you want to tightly control access to, such as API keys, passwords, certificates, and more. Vault provides a unified interface to any secret, while providing tight access control and recording a detailed audit log. For more information, please see the Vault documentation.

Using the Container

We chose Alpine as a lightweight base with a reasonably small surface area for security concerns, but with enough functionality for development and interactive debugging.

Vault always runs under dumb-init, which handles reaping zombie processes and forwards signals on to all processes running in the container. This binary is built by HashiCorp and signed with our GPG key, so you can verify the signed package used to build a given base image.

Running the Vault container with no arguments will give you a Vault server in development mode. The provided entry point script will also look for Vault subcommands and run vault with that subcommand. For example, you can execute docker run vault status and it will run the vault status command inside the container. The entry point also adds some special configuration options as detailed in the sections below when running the server subcommand. Any other command gets exec-ed inside the container under dumb-init.

The container exposes two optional VOLUMEs:

  • /vault/logs, to use for writing persistent audit logs. By default nothing is written here; the file audit backend must be enabled with a path under this directory.
  • /vault/file, to use for writing persistent storage data when using the file data storage backend. By default nothing is written here (a dev server uses an in-memory data store); the file data storage backend must be enabled in Vault's configuration before the container is started.
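As a sketch, both volumes can be bound to host directories like so (the ./vault host paths are assumptions; the docker run command is assembled into a string and printed so it can be reviewed before running it on a Docker host):

```shell
# Create host directories for audit logs and file-backend data
# (the ./vault prefix is an assumption, not part of the image).
mkdir -p ./vault/logs ./vault/file

# Assemble the docker run command; echoed for review rather
# than executed here.
run_cmd="docker run --cap-add=IPC_LOCK \
  -v $(pwd)/vault/logs:/vault/logs \
  -v $(pwd)/vault/file:/vault/file \
  vault server"
echo "$run_cmd"
```

Remember that nothing lands in /vault/logs until the file audit backend is enabled with a path under that directory.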

The container has a Vault configuration directory set up at /vault/config and the server will load any HCL or JSON configuration files placed here by binding a volume or by composing a new image and adding files. Alternatively, configuration can be added by passing the configuration JSON via environment variable VAULT_LOCAL_CONFIG. Please note that due to a bug in the current release of Vault (0.6.0), you should not use the name local.json for any configuration file in this directory.
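A minimal sketch of the volume-binding approach, assuming a host directory named ./vault-config and a file named server.hcl (both names are illustrative; just avoid the name local.json on 0.6.0, as noted above):

```shell
# Write a minimal HCL server config into a host directory that
# will be bound over /vault/config.
mkdir -p ./vault-config
cat > ./vault-config/server.hcl <<'EOF'
# File storage plus lease limits.
backend "file" {
  path = "/vault/file"
}
default_lease_ttl = "168h"
max_lease_ttl = "720h"
EOF

# The server would then be started with the directory mounted:
#   docker run --cap-add=IPC_LOCK -v "$(pwd)/vault-config:/vault/config" vault server
```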

Memory Locking and 'setcap'

The container will attempt to lock memory to prevent sensitive values from being swapped to disk and as a result must have --cap-add=IPC_LOCK provided to docker run. Since the Vault binary runs as a non-root user, setcap is used to give the binary the ability to lock memory. With some Docker storage plugins in some distributions this call will not work correctly; it seems to fail most often with AUFS. The memory locking behavior can be disabled by setting the SKIP_SETCAP environment variable to any non-empty value.
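If memory locking cannot be made to work on your storage driver, there are two escape hatches. As a sketch (the file name disable-mlock.json is an assumption for illustration):

```shell
# Option 1: skip the setcap call entirely by setting SKIP_SETCAP,
# e.g.:  docker run -e SKIP_SETCAP=1 vault server
#
# Option 2: disable mlock in Vault's configuration instead; the
# JSON form of that setting is:
cat > disable-mlock.json <<'EOF'
{
  "disable_mlock": true
}
EOF
```

Note that running without mlock means sensitive values may be swapped to disk, so prefer granting IPC_LOCK where possible.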

Running Vault for Development

$ docker run --cap-add=IPC_LOCK -d --name=dev-vault vault

This runs a completely in-memory Vault server, which is useful for development but should not be used in production.

When running in development mode, two additional options can be set via environment variables:

  • VAULT_DEV_ROOT_TOKEN_ID: This sets the ID of the initial generated root token to the given value
  • VAULT_DEV_LISTEN_ADDRESS: This sets the IP:port of the development server listener (defaults to

As an example:

$ docker run --cap-add=IPC_LOCK -e 'VAULT_DEV_ROOT_TOKEN_ID=myroot' -e 'VAULT_DEV_LISTEN_ADDRESS=' vault

Running Vault in Server Mode

$ docker run --cap-add=IPC_LOCK -e 'VAULT_LOCAL_CONFIG={"backend": {"file": {"path": "/vault/file"}}, "default_lease_ttl": "168h", "max_lease_ttl": "720h"}' vault server

This runs a Vault server using the file storage backend at path /vault/file, with a default secret lease duration of one week and a maximum of 30 days.

Note the --cap-add=IPC_LOCK: this is required in order for Vault to lock memory, which prevents sensitive values from being swapped to disk. This is highly recommended. In a non-development environment, if you do not wish to use this functionality, you must add disable_mlock = true (or "disable_mlock": true in JSON) to the configuration.

At startup, the server will read configuration HCL and JSON files from /vault/config (any information passed into VAULT_LOCAL_CONFIG is written into local.json in this directory and read as part of reading the directory for configuration files). Please see Vault's configuration documentation for a full list of options.

Since 0.6.3 this container also supports the VAULT_REDIRECT_INTERFACE and VAULT_CLUSTER_INTERFACE environment variables. If set, the IP addresses used for the redirect and cluster addresses in Vault's configuration will be the address of the named interface inside the container (e.g. eth0).
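As a sketch (eth0 as the interface name is an assumption; it depends on the container's network setup), the command is assembled and printed here for review rather than executed:

```shell
# Derive the redirect and cluster addresses from the container's
# eth0 interface via the two environment variables.
run_cmd="docker run --cap-add=IPC_LOCK \
  -e VAULT_REDIRECT_INTERFACE=eth0 \
  -e VAULT_CLUSTER_INTERFACE=eth0 \
  vault server"
echo "$run_cmd"
```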


View license information for the software contained in this image.


Comments (9)
a month ago

Can someone remove those Google tiny URLs??
@nemonik, yeah, it seems we have to query Vault using http and not https... Were you able to fix the issue?

a month ago

In answer to my question... it didn't like the SSL files being in ${pwd}/config/ssl

a month ago

Why is this happening on CentOS and OS X?

docker run --name hashicorp-vault -p 8200:8200 --cap-add=IPC_LOCK -e 'VAULT_LOCAL_CONFIG={"backend":{"file":{"path":"/vault/data"}},"listener":{"tcp":{"address":"","tls_cert_file":"/vault/config/ssl/server.pem","tls_key_file":"/vault/config/ssl/server-key.pem"}}}' -v $(pwd)/vault/logs/:/vault/logs -v $(pwd)/vault/data:/vault/data -v $(pwd)/vault/config:/vault/config vault server


docker run --name hashicorp-vault -p 8200:8200 --cap-add=IPC_LOCK -e 'VAULT_LOCAL_CONFIG={"backend":{"file":{"path":"/vault/data"}},"listener":{"tcp":{"address":"","tls_cert_file":"/vault/config/ssl/server.pem","tls_key_file":"/vault/config/ssl/server-key.pem"}}}' -v $(pwd)/vault/logs/:/vault/logs -v $(pwd)/vault/data:/vault/data -v $(pwd)/vault/config:/vault/config vault server -config=/vault/config/local.json


Error initializing listener of type tcp: listen tcp bind: address already in use
3 months ago

I ran Vault for development, but when I run the command
docker run vault status, it gives the error "Couldn't start vault with IPC_LOCK. Disabling IPC_LOCK, please use --privileged or --cap-add IPC_LOCK
Error checking seal status: Get dial tcp getsockopt: connection refused". I set the environment variable SKIP_SETCAP to a non-empty value, but that didn't solve it. When I run the status command like
docker exec [container id] vault status -address="", the command works and I can see the status result. I can't figure out what is wrong.

4 months ago

Is the compose file still valid?

4 months ago

How can we run this in server mode with docker swarm?

5 months ago

Would love it if you guys would release new image tags to the repo at the time of the announcement mail!

6 months ago

Here's an example docker-compose.yml setup to get you started.

version: '2'
services:
  myvault:
    image: vault
    container_name: myvault
    ports:
      - ""
    volumes:
      - ./vault/file:/vault/file:rw
      - ./vault/config:/vault/config:rw
    cap_add:
      - IPC_LOCK
    entrypoint: vault server -config=/vault/config/

Put this in your ./vault/config/vault.json file:

{
  "backend": {"file": {"path": "/vault/file"}},
  "listener": {"tcp": {"address": "", "tls_disable": 1}},
  "default_lease_ttl": "168h",
  "max_lease_ttl": "720h"
}

Once this is in place, run:

docker-compose up -d myvault
export VAULT_ADDR=
vault init

The output of vault init will include a set of unseal keys. You will need to supply at least three of them to unseal the vault before you can use it.
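The init output of that era prints lines of the form "Unseal Key N: ...". As a sketch (the saved output below reuses the truncated, illustrative key fragments from this comment, not real keys), the key values can be pulled out with awk:

```shell
# Save a copy of the (illustrative) vault init output, then
# extract just the unseal key values.
cat > init-output.txt <<'EOF'
Unseal Key 1: p1hZDCNuUVSdIB
Unseal Key 2: GxdiaXW2yN/6qw9
Unseal Key 3: uulJzGs8bM2lbp64
Initial Root Token: myroot
EOF
awk -F': ' '/^Unseal Key/ {print $2}' init-output.txt
```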

You can unseal the vault using the vault unseal command, or even write these unseal commands into a bash script:

#!/bin/sh
set -x

export VAULT_ADDR=

vault unseal p1hZDCNuUVSdIB...
vault unseal GxdiaXW2yN/6qw9...
vault unseal uulJzGs8bM2lbp64...

and hey presto...