flaviostutz/backtor
Backtor is a backup scheduler tool that uses Conductor workers to handle backup operations.
It is focused on the scheduling part of a common backup routine, leaving the dirty storage job to specialized storage/database tools. You can use any backup backend by just implementing a simple Conductor worker for the tasks "backup" and "remove".
"backup" is called from time to time to create a new backup, and the "remove" task is launched to remove a previous backup that is no longer needed, according to the retention policy.
The triggering and retention of backups are based on the functional perception of backups, so you configure:
Retention policies: for how long must a backup be retained? It depends on what the user needs when something goes wrong. In general, the more recent the period, the more backups in time you need. By default, Backtor will try to keep something like the following (if a backup falls outside this policy, the "remove_backup" workflow will be called):
Triggering cron string: a cron string that defines when a new backup will be created (some help on cron strings: https://crontab.guru/examples.html). If no cron string is provided, it will be derived from the needs of the retention policy by default.
Based on those retention parameters, Backtor will launch a "create_backup" or a "remove_backup" workflow on Conductor in order to maintain what we need as a backup that can save our souls! The actual backup creation or removal is performed by Conductor workers specialized in the target backup storage/tool.
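The pruning idea behind a retention policy can be sketched like this (an illustration only, not Backtor's actual algorithm): keep the newest backup of each of the last N days and mark everything else for removal.

```python
from datetime import datetime

def backups_to_remove(timestamps, keep_daily):
    """Illustrative daily retention pruning (NOT Backtor's real code):
    keep the newest backup of each of the last `keep_daily` days,
    return the timestamps of everything else, i.e. the removal candidates."""
    newest_per_day = {}
    for ts in timestamps:
        day = ts.date()
        if day not in newest_per_day or ts > newest_per_day[day]:
            newest_per_day[day] = ts
    # retain only the most recent `keep_daily` days
    kept_days = sorted(newest_per_day)[-keep_daily:]
    keep = {newest_per_day[d] for d in kept_days}
    return sorted(t for t in timestamps if t not in keep)
```

In Backtor the equivalent decision results in a "remove_backup" workflow being launched for each backup that falls outside the policy.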
Check out another Conductor-based tool that may be helpful for you:
Hope this can help you!
```yaml
version: "3.5"

services:

  backtor:
    image: flaviostutz/backtor
    restart: always
    ports:
      - 6000:6000
    environment:
      - LOG_LEVEL=debug
      - CONDUCTOR_API_URL=http://backtor-conductor:8080/api

  backtor-conductor:
    image: flaviostutz/backtor-conductor
    restart: always
    ports:
      - 8080:8080
    environment:
      - DYNOMITE_HOSTS=dynomite:8102:us-east-1c
      - ELASTICSEARCH_URL=elasticsearch:9300
      - LOADSAMPLE=false
      - PROVISIONING_UPDATE_EXISTING_TASKS=false

  dynomite:
    image: flaviostutz/dynomite:0.7.5
    restart: always
    ports:
      - 8102:8102

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:5.6.8
    restart: always
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx1000m"
      - transport.host=0.0.0.0
      - discovery.type=single-node
      - xpack.security.enabled=false
    ports:
      - 9200:9200
      - 9300:9300
    logging:
      driver: "json-file"
      options:
        max-size: "20MB"
        max-file: "5"

  conductor-ui:
    image: flaviostutz/conductor-ui
    restart: always
    environment:
      - WF_SERVER=http://backtor-conductor:8080/api/
    ports:
      - 5000:5000
```
```
docker-compose up
```
and see the logs

GET /backup
```json
[
  {
    "name": "backup72109432",
    "enabled": 1,
    "RunningCreateWorkflowID": "c0535ba5-f838-4de7-979b-f436a8a66b17",
    "backupCronString": "0/2 * * * * *",
    "lastUpdate": "2019-07-21T00:52:50.0846172Z",
    "retentionMinutely": "0@L",
    "retentionHourly": "0@L",
    "retentionDaily": "4@L",
    "retentionWeekly": "4@L",
    "retentionMonthly": "3@L",
    "retentionYearly": "2@L"
  }
]
```
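A quick way to inspect the specs from a script (a sketch: it assumes a Backtor instance is listening on localhost:6000, as in the compose file above):

```python
import json
import urllib.request

def list_backups(base_url="http://localhost:6000"):
    """Fetch all backup specs from a running Backtor instance
    (base_url is an assumption based on the compose file above)."""
    with urllib.request.urlopen(f"{base_url}/backup") as resp:
        return json.loads(resp.read())

def enabled_names(specs):
    """Names of the specs whose 'enabled' flag is 1."""
    return [s["name"] for s in specs if s.get("enabled") == 1]
```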
POST /backup
Create a new backup specification
Request body (json):
```
{
  name: {backup spec name},
  enabled: {0 or 1},
  fromDate: {iso date - from datetime to enable backup},
  toDate: {iso date - to datetime to enable backup},
  retentionHourly: {hourly policy},
  retentionDaily: {"4@L" means "keep 4 daily backups that are taken on the last hour (L) of the day"},
  retentionWeekly: {weekly policy},
  retentionMonthly: {monthly policy},
  retentionYearly: {yearly policy}
}
```
Status code 201 is returned on success.

PUT /backup/{name}
Update the backup specification identified by {name} (request json: same fields as POST /backup).
Examples:
Default backup
Simple daily backups
Every 4 hours backups
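For instance, a simple daily-backup spec could be created like this (a sketch: the field names follow the POST body above and the GET sample, the values and base URL are just an example):

```python
import json
import urllib.request

def make_backup_spec(name, cron=None, retention_daily="4@L",
                     retention_weekly="0@L", retention_monthly="0@L"):
    """Build a simple backup spec dict. Field names follow the
    POST /backup body documented above; the cron field name
    ('backupCronString') is assumed from the GET /backup sample."""
    spec = {
        "name": name,
        "enabled": 1,
        "retentionDaily": retention_daily,
        "retentionWeekly": retention_weekly,
        "retentionMonthly": retention_monthly,
    }
    if cron is not None:
        spec["backupCronString"] = cron
    return spec

def create_backup(spec, base_url="http://localhost:6000"):
    """POST the spec to a running Backtor instance (assumed at base_url);
    a 201 status indicates success."""
    req = urllib.request.Request(
        f"{base_url}/backup",
        data=json.dumps(spec).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status
```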
In order for the backups to take place, you have to implement, or use a ready-made, Conductor worker for the following tasks:
"backup"
"remove"
Backtor has a /metrics endpoint compatible with Prometheus.
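A quick sanity check of the endpoint is to fetch the text and parse the sample lines (a sketch: this is a minimal reader of the Prometheus text exposition format, not a full parser, and it makes no assumption about which metric names Backtor exposes):

```python
def parse_metrics(text):
    """Parse Prometheus text exposition into {series: value} pairs,
    skipping comment/HELP/TYPE lines. Good enough for a sanity check;
    not a full parser (e.g. it ignores timestamps)."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        series, _, value = line.rpartition(" ")
        samples[series] = float(value)
    return samples
```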
Please submit your issues and pull requests here!
docker pull flaviostutz/backtor