Backup files to S3 (includes tar, gzip & optional GPG encryption)

This Docker image can:

  • back up files to Amazon S3 (including tar & gzip)
  • restore a backup
  • optionally encrypt the backup file before uploading to S3 and decrypt it after downloading from S3


Tar & gzip files in volume "/data", upload to S3 bucket

docker run --rm \
  -v /var/myapp/data:/data:ro \
  -e S3_PATH=s3://mybucket/myapp/ \
  rori/backup-to-s3

If you want to encrypt the backup file, additionally provide the following two parameters:

-v /mykeys:/gpgkey \
-e GPGKEY_FILE=/gpgkey/public.key \

The environment variable GPGKEY_FILE points to your public GPG key, so you have to mount a Docker volume at /gpgkey that contains the public key.
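Putting the pieces together, an encrypted backup run might look like this (the paths and bucket are the same examples used above; adjust them to your setup):

```shell
# Backup with GPG encryption: tar & gzip /data, encrypt with the
# mounted public key, then upload to the S3 bucket.
docker run --rm \
  -v /var/myapp/data:/data:ro \
  -v /mykeys:/gpgkey \
  -e S3_PATH=s3://mybucket/myapp/ \
  -e GPGKEY_FILE=/gpgkey/public.key \
  rori/backup-to-s3
```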


Download from S3 bucket, un-gzip & untar files to volume "/data"

docker run --rm \
  -v /var/myapp/data:/data \
  -e S3_PATH=s3://mybucket/myapp/2016-01-23_22-40-42.tar.gz \
  rori/backup-to-s3

The environment variable S3_PATH refers to the backup file in your S3 bucket.

If you want to decrypt the backup, please provide three additional parameters:

-v /mykeys/private:/gpgkey \
-e GPGKEY_FILE=/gpgkey/private.key \
-it \

The environment variable GPGKEY_FILE points to your private GPG key, so you have to mount a Docker volume at /gpgkey that contains the private key. The container will ask for your private key's passphrase, which is why the interactive flag -it is required.
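Combined, a decrypting restore might look like this (the S3_PATH is the example file name from above; an encrypted backup in your bucket may carry a different name or extension):

```shell
# Restore with GPG decryption: download from S3, decrypt with the
# mounted private key (passphrase prompted via -it), un-gzip & untar to /data.
docker run --rm -it \
  -v /var/myapp/data:/data \
  -v /mykeys/private:/gpgkey \
  -e S3_PATH=s3://mybucket/myapp/2016-01-23_22-40-42.tar.gz \
  -e GPGKEY_FILE=/gpgkey/private.key \
  rori/backup-to-s3
```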

If you don't like the idea of a Docker container using your private key, just run the command without the decrypt parameters and decrypt, un-gzip & untar the backup file yourself. The container won't be offended :-)
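Doing it yourself is a two-step pipeline; a minimal sketch, assuming the downloaded file is named like the example above with a .gpg suffix (the exact name depends on your backup):

```shell
# Decrypt with your private key (gpg will prompt for the passphrase),
# then un-gzip & untar the archive into the target directory.
gpg --output backup.tar.gz --decrypt 2016-01-23_22-40-42.tar.gz.gpg
tar -xzf backup.tar.gz -C /var/myapp/data
```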

Periodic backups

The image does not run periodic backups itself. You can achieve this by running cron jobs or systemd timers.
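With cron, a single crontab entry wrapping the backup command from above is enough; a minimal sketch (schedule, paths and bucket are placeholders):

```shell
# Crontab entry: run the backup every night at 02:30.
30 2 * * * docker run --rm -v /var/myapp/data:/data:ro -e S3_PATH=s3://mybucket/myapp/ rori/backup-to-s3
```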
