Docker container with a data volume from S3.
Creates a Docker container whose data is restored from and backed up to a directory on S3. You can use this to run short-lived processes that work with and persist data to and from S3.
For the simplest usage, you can just start the data container:
docker run -d --name my-data-container \
  elementar/s3-volume /data s3://mybucket/someprefix
This will download the data from the S3 location you specify into the container's `/data` directory. When the container shuts down, the data will be synced back to S3.
To use the data from another container, you can use the `--volumes-from` option:
docker run -it --rm --volumes-from=my-data-container busybox ls -l /data
When the `BACKUP_INTERVAL` environment variable is set, a watcher process will sync the `/data` directory to S3 on the interval you specify. The interval can be specified in seconds, minutes, hours or days (adding `s`, `m`, `h` or `d` as the suffix):
docker run -d --name my-data-container -e BACKUP_INTERVAL=2m \
  elementar/s3-volume /data s3://mybucket/someprefix
If you are running on EC2, IAM role credentials should just work. Otherwise, you can supply credential information using environment variables:
docker run -d --name my-data-container \
  -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... \
  elementar/s3-volume /data s3://mybucket/someprefix
Any environment variable available to the `aws-cli` command can be used. See http://docs.aws.amazon.com/cli/latest/userguide/cli-environment.html for more information.
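For example, if your bucket lives in a specific region, you could set `AWS_DEFAULT_REGION` (a standard `aws-cli` environment variable; the bucket and prefix here are the same illustrative ones as above):

docker run -d --name my-data-container -e AWS_DEFAULT_REGION=us-east-1 \
  elementar/s3-volume /data s3://mybucket/someprefix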
If you are using an S3-compatible service (such as Oracle OCI Object Storage), you may want to set the service's endpoint URL:
docker run -d --name my-data-container -e ENDPOINT_URL=... \
  elementar/s3-volume /data s3://mybucket/someprefix
A final sync will always be performed on container shutdown. A sync can be forced by sending the container the `USR1` signal:
docker kill --signal=USR1 my-data-container
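After forcing a sync, you can check what actually landed on S3 with the `aws` CLI from any machine with credentials (bucket and prefix are the illustrative ones used throughout):

aws s3 ls s3://mybucket/someprefix/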
The first time the container is run, it will fetch the contents of the S3 location to initialize the `/data` directory. If you want to force an initial sync again, you can run the container again with the `--force-restore` option:
docker run -d --name my-data-container \
  elementar/s3-volume --force-restore /data s3://mybucket/someprefix
By default, files deleted from your local file system will also be deleted remotely. If you wish to turn this off, set the `S3_SYNC_FLAGS` environment variable to an empty string:
docker run -d -e S3_SYNC_FLAGS="" elementar/s3-volume /data s3://mybucket/someprefix
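`S3_SYNC_FLAGS` appears to hold extra flags for the underlying `aws s3 sync` call (the delete behavior above suggests the default is `--delete`). Assuming that, you could pass other sync flags instead, for example to skip temporary files; the value below is a sketch, not a documented default:

docker run -d -e S3_SYNC_FLAGS="--exclude *.tmp" \
  elementar/s3-volume /data s3://mybucket/someprefix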
Most of the time, you will use this image to sync data for another container. You can use `docker-compose` for that:
# docker-compose.yaml
version: "2"

volumes:
  s3data:
    driver: local

services:
  s3vol:
    image: elementar/s3-volume
    command: /data s3://mybucket/someprefix
    volumes:
      - s3data:/data
  db:
    image: postgres
    volumes:
      - s3data:/var/lib/postgresql/data
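One caveat with this setup: nothing guarantees the restore has finished before `db` starts. As a sketch, you could add a `depends_on` entry to the `db` service so Compose at least starts `s3vol` first (note that `depends_on` only orders startup; it does not wait for the restore to complete):

  db:
    image: postgres
    depends_on:
      - s3vol
    volumes:
      - s3data:/var/lib/postgresql/data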
To contribute, fork the repository, then create a feature branch, commit your changes, and push the branch:

git checkout -b my-new-feature
git commit -am 'Add some feature'
git push origin my-new-feature

Then open a pull request.
This repository is released under the MIT license.