Docker container with a data volume from S3.


Creates a Docker container that is restored from and backed up to a directory on S3. You can use this to run short-lived processes that work with and persist data to and from S3.


Usage

For the simplest usage, you can just start the data container:

docker run -d --name my-data-container \
           elementar/s3-volume /data s3://mybucket/someprefix

This will download the data from the S3 location you specify into the container's /data directory. When the container shuts down, the data will be synced back to S3.

To use the data from another container, you can use the --volumes-from option:

docker run -it --rm --volumes-from=my-data-container busybox ls -l /data
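For example, a short-lived job can write its output into the shared volume; when the data container later shuts down, the file is synced to S3 along with the rest of /data (the file name here is illustrative):

```shell
# Run a throwaway job that persists a result into the shared volume.
docker run --rm --volumes-from=my-data-container busybox \
           sh -c 'echo "job finished" > /data/report.txt'
```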

Configuring a sync interval

When the BACKUP_INTERVAL environment variable is set, a watcher process will sync the /data directory to S3 on the interval you specify. The interval can be specified in seconds, minutes, hours or days (adding s, m, h or d as the suffix):
docker run -d --name my-data-container -e BACKUP_INTERVAL=2m \
           elementar/s3-volume /data s3://mybucket/someprefix

Configuring credentials

If you are running on EC2, IAM role credentials should just work. Otherwise, you can supply credential information using environment variables:

docker run -d --name my-data-container \
           -e AWS_ACCESS_KEY_ID=... -e AWS_SECRET_ACCESS_KEY=... \
           elementar/s3-volume /data s3://mybucket/someprefix
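To keep credentials out of your shell history, the same variables can also be supplied through Docker's --env-file option (the aws.env file name is illustrative):

```shell
# aws.env contains one VAR=value pair per line, e.g.:
#   AWS_ACCESS_KEY_ID=...
#   AWS_SECRET_ACCESS_KEY=...
docker run -d --name my-data-container --env-file aws.env \
           elementar/s3-volume /data s3://mybucket/someprefix
```

Keep the env file out of version control, since it holds secrets.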

Any environment variable available to the aws command can be used; see the AWS CLI environment variable documentation for more information.

Configuring an endpoint URL

If you are using an S3-compatible service (such as Oracle OCI Object Storage), you may want to set the service's endpoint URL:

docker run -d --name my-data-container -e ENDPOINT_URL=... \
           elementar/s3-volume /data s3://mybucket/someprefix
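As a sketch, running against a local MinIO server (an S3-compatible store) might look like this; the endpoint URL and credentials below are illustrative:

```shell
docker run -d --name my-data-container \
           -e ENDPOINT_URL=http://localhost:9000 \
           -e AWS_ACCESS_KEY_ID=minioadmin -e AWS_SECRET_ACCESS_KEY=minioadmin \
           elementar/s3-volume /data s3://mybucket/someprefix
```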

Forcing a sync

A final sync will always be performed on container shutdown. A sync can be forced by sending the container the USR1 signal:

docker kill --signal=USR1 my-data-container
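Since the signal can be sent from the host at any time, a scheduled sync can be set up with an ordinary crontab entry (a sketch; adjust the schedule and container name to taste):

```shell
# Append a host crontab entry that forces a sync every night at 02:00.
( crontab -l 2>/dev/null; \
  echo '0 2 * * * docker kill --signal=USR1 my-data-container' ) | crontab -
```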

Forcing a restoration

The first time the container is run, it will fetch the contents of the S3 location to initialize the /data directory. If you want to force a restore again, you can run the container with the --force-restore flag:
docker run -d --name my-data-container \
           elementar/s3-volume --force-restore /data s3://mybucket/someprefix

Deletion and sync

By default, files deleted from your local file system will also be deleted remotely. If you wish to turn this off, set the S3_SYNC_FLAGS environment variable to an empty string:
docker run -d -e S3_SYNC_FLAGS="" elementar/s3-volume /data s3://mybucket/someprefix
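Assuming S3_SYNC_FLAGS is passed straight through to aws s3 sync (its default appears to be --delete), you can also combine flags, for example keeping deletions while skipping temporary files:

```shell
docker run -d -e S3_SYNC_FLAGS="--delete --exclude '*.tmp'" \
           elementar/s3-volume /data s3://mybucket/someprefix
```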

Using Compose and named volumes

Most of the time, you will use this image to sync data for another container. You can use Docker Compose and named volumes for that:
# docker-compose.yaml
version: "2"

volumes:
  s3data:
    driver: local

services:
  s3vol:
    image: elementar/s3-volume
    command: /data s3://mybucket/someprefix
    volumes:
      - s3data:/data
  db:
    image: postgres
    volumes:
      - s3data:/var/lib/postgresql/data
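With that file in place, bringing the stack up follows the usual Compose workflow:

```shell
docker-compose up -d          # starts s3vol (restores /data) and postgres
docker-compose logs s3vol     # watch the restore/sync output
# Force a sync of the named volume on demand:
docker kill --signal=USR1 $(docker-compose ps -q s3vol)
```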


Contributing

  1. Fork it!
  2. Create your feature branch:
    git checkout -b my-new-feature
  3. Commit your changes:
    git commit -am 'Add some feature'
  4. Push to the branch:
    git push origin my-new-feature
  5. Submit a pull request :D


Credits

  • Original Developer - Dave Newman (@whatupdave)
  • Current Maintainer - Fábio Batista (@fabiob)


This repository is released under the MIT license.

