



GitHub Action to Sync S3 Bucket 🔄

This simple action uses the vanilla AWS CLI to sync a directory (either from your repository or generated during your workflow) with a remote S3 bucket.
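Conceptually, the action boils down to a single `aws s3 sync` invocation. As a rough sketch (the bucket name, source directory, and flags below are illustrative, and the real action also wires up credentials and an optional custom endpoint for you), it is approximately equivalent to a plain `run:` step like this:

```yaml
# Illustrative only: roughly what the action does under the hood.
- name: Sync directory to S3 with the vanilla AWS CLI
  run: aws s3 sync ./public s3://my-example-bucket --acl public-read --follow-symlinks --delete
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: us-east-1
```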

Usage

workflow.yml Example

Place in a `.yml` file such as this one in your `.github/workflows` folder. Refer to the documentation on workflow YAML syntax here.

As of v0.3.0, all `aws s3 sync` flags are optional to allow for maximum customizability (that's a word, I promise) and must be provided by you via `args:`.

The following example includes optimal defaults for a public static website:

  • `--acl public-read` makes your files publicly readable (make sure your bucket settings are also set to public).
  • `--follow-symlinks` won't hurt and fixes some weird symbolic link problems that may come up.
  • Most importantly, `--delete` permanently deletes files in the S3 bucket that are not present in the latest version of your repository/build.
  • Optional tip: If you're uploading the root of your repository, adding `--exclude '.git/*'` prevents your `.git` folder from syncing, which would expose your source code history if your project is closed-source. (To exclude more than one pattern, you must have one `--exclude` flag per exclusion. The single quotes are also important! See the multi-exclude sketch after the example below.)
```yaml
name: Upload Website

on:
  push:
    branches:
      - master

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --follow-symlinks --delete
        env:
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: 'us-west-1'   # optional: defaults to us-east-1
          SOURCE_DIR: 'public'      # optional: defaults to entire repository
```
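Building on the optional tip above, the `with:` block below is a small sketch of passing more than one exclusion. The extra `.github/*` pattern is only an illustration, not something the action requires:

```yaml
# Illustrative sketch: one --exclude flag per pattern, each pattern single-quoted.
with:
  args: --acl public-read --follow-symlinks --delete --exclude '.git/*' --exclude '.github/*'
```

Each pattern gets its own `--exclude` flag, and the single quotes keep the shell from expanding the globs before they reach `aws s3 sync`.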

Configuration

The following settings must be passed as environment variables as shown in the example. Sensitive information, especially `AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`, should be set as encrypted secrets; otherwise, they'll be public to anyone browsing your repository's source code and CI logs.

| Key | Value | Suggested Type | Required | Default |
| ------------- | ------------- | ------------- | ------------- | ------------- |
| `AWS_ACCESS_KEY_ID` | Your AWS Access Key. More info here. | secret env | Yes | N/A |
| `AWS_SECRET_ACCESS_KEY` | Your AWS Secret Access Key. More info here. | secret env | Yes | N/A |
| `AWS_S3_BUCKET` | The name of the bucket you're syncing to. For example, `jarv.is` or `my-app-releases`. | secret env | Yes | N/A |
| `AWS_REGION` | The region where you created your bucket. Set to `us-east-1` by default. Full list of regions here. | env | No | `us-east-1` |
| `AWS_S3_ENDPOINT` | The endpoint URL of the bucket you're syncing to. Can be used for VPC scenarios or for non-AWS services using the S3 API, like DigitalOcean Spaces. | env | No | Automatic (`s3.amazonaws.com` or AWS's region-specific equivalent) |
| `SOURCE_DIR` | The local directory (or file) you wish to sync/upload to S3. For example, `public`. Defaults to your entire repository. | env | No | `./` (root of cloned repository) |
| `DEST_DIR` | The directory inside of the S3 bucket you wish to sync/upload to. For example, `my_project/assets`. Defaults to the root of the bucket. | env | No | `/` (root of bucket) |
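As a sketch of the non-AWS case, the `env:` block below points the sync at a DigitalOcean Spaces bucket and a subdirectory of it. The Space name, endpoint region, and secret names are placeholders, not values the action defines:

```yaml
env:
  AWS_S3_BUCKET: 'my-space-name'                          # placeholder Space name
  AWS_S3_ENDPOINT: 'https://nyc3.digitaloceanspaces.com'  # illustrative region-specific Spaces endpoint
  AWS_ACCESS_KEY_ID: ${{ secrets.SPACES_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.SPACES_SECRET_ACCESS_KEY }}
  SOURCE_DIR: 'public'
  DEST_DIR: 'my_project/assets'
```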

License

This project is distributed under the MIT license.
