amsdard/s3-sync
Syncs local data with S3, in both directions.
A Docker container that synchronizes a specified data volume with an S3 bucket using s3cmd sync and cron. If S3_RESTORE_PATH is empty, the script fetches data from S3 to the local destination; if S3_RESTORE_PATH already contains data, the script skips fetching the S3 data.
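The restore-at-start behavior described above can be sketched roughly like this (an assumed reading of the image's entrypoint, not its actual script; the s3cmd invocation is illustrative):

```shell
#!/bin/sh
# Illustrative sketch only: restore from S3 when the local data dir is empty.
DATA_PATH="${DATA_PATH:-/data/}"
S3_RESTORE_PATH="${S3_RESTORE_PATH:-$S3_PATH}"   # defaults to S3_PATH

if [ -z "$(ls -A "$DATA_PATH" 2>/dev/null)" ]; then
  # Local destination is empty: pull everything down from S3.
  s3cmd sync "$S3_RESTORE_PATH" "$DATA_PATH"
  chown -R "${OWNER_UID:-1000}:${OWNER_GID:-1000}" "$DATA_PATH"
else
  echo "Skipping restore: $DATA_PATH already contains data"
fi
```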
docker run -d [OPTIONS] amsdard/s3-sync
- ACCESS_KEY=<AWS_KEY>: your AWS key.
- SECRET_KEY=<AWS_SECRET>: your AWS secret.
- S3_PATH=s3://<BUCKET_NAME>/<PATH>/: S3 bucket name and path. Should end with a trailing slash.
- S3_RESTORE_PATH=s3://<BUCKET_NAME>/<PATH>/: S3 path from which to restore data; defaults to S3_PATH. Should end with a trailing slash.
- DATA_PATH=/<path to data>/: the container's data folder. Should end with a trailing slash.
- OWNER_UID=1000: changes ownership of downloaded files to the user with id=1000.
- OWNER_GID=1000: changes the group of downloaded files to the group with id=1000.
- 'CRON_SCHEDULE=0 1 * * *': specifies when the cron job starts (details). Default is 0 1 * * * (runs every day at 1:00 am).
- S3_GET_PARAMS: parameters to pass to the get command (full list here).
- S3_SYNC_PARAMS="--dry-run": parameters to pass to the sync command (full list here), e.g. --delete-removed --exclude .ssh/* --exclude .docker/* --exclude workspace/*
- MAKE_DATETIME_SNAPSHOTS: true|false; setting this to true will additionally send files to the {bucket_path}_snapshots/{yyyy-mm-dd}/{hh-mm} location.
- COMPRESS=false: whether or not to archive data before backup.
- COMPRESS_PARAMS=--exclude=docker: will exclude docker from the archive. Must be a path relative to DATA_PATH.
- PASSWORD='': password used to encrypt the archive. COMPRESS must be set to true.
- BACKUP_AT_START='false': will back up data at start.
- RESTORE_AT_START='true': will restore data at start.
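For illustration, the {bucket_path}_snapshots/{yyyy-mm-dd}/{hh-mm} snapshot location mentioned above could be derived like this (a sketch; the image's actual script may build it differently, and the bucket name is a placeholder):

```shell
#!/bin/sh
# Derive the snapshot destination from S3_PATH (illustrative).
S3_PATH="s3://my-bucket/backup/"
# Strip the trailing slash, then append _snapshots/{date}/{time}/.
SNAPSHOT_PATH="${S3_PATH%/}_snapshots/$(date +%Y-%m-%d)/$(date +%H-%M)/"
echo "$SNAPSHOT_PATH"
# prints something like s3://my-bucket/backup_snapshots/2024-05-01/13-45/
```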
Run an upload to S3 every day at 12:00 pm (noon):
docker run -d \
  -e ACCESS_KEY=myawskey \
  -e SECRET_KEY=myawssecret \
  -e S3_PATH=s3://my-bucket/backup/ \
  -e 'CRON_SCHEDULE=0 12 * * *' \
  -v /home/user/data:/data:ro \
  amsdard/s3-sync
It is super important to add / at the end of paths, otherwise things will get messy. For example:
s3-sync:
  image: amsdard/s3-sync
  environment:
    - ACCESS_KEY=myawskey
    - SECRET_KEY=myawssecret
    - S3_PATH=s3://my-bucket/files/
    - S3_RESTORE_PATH=s3://my-bucket/files/
    - DATA_PATH=/app/files/
    - CRON_SCHEDULE=*/1 * * * *
    - OWNER_UID=1000
    - OWNER_GID=1000
  volumes_from:
    - data
This will download the content of s3://my-bucket/files/ into the directory /app/files, whereas s3://my-bucket/files would download the "files" directory into "/app/files", resulting in "/app/files/files" being created.
DATA_PATH also needs to end with "/" if you want its content to be uploaded to the specified S3 path. So "/app/files/" will upload all files from this directory into s3://my-bucket/files/, whereas "/app/files" would upload the "files" directory into s3://my-bucket/files/, resulting in s3://my-bucket/files/files being created.
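The same trailing-slash rule in s3cmd terms (illustrative commands, not meant to be run verbatim; the bucket name is a placeholder):

```shell
# Trailing slash present: the *contents* of /app/files land under s3://my-bucket/files/
s3cmd sync /app/files/ s3://my-bucket/files/

# Trailing slash missing: the directory itself is uploaded,
# producing s3://my-bucket/files/files/
s3cmd sync /app/files s3://my-bucket/files/
```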
docker pull amsdard/s3-sync