play-s3

S3 module for Play, by kaliber-scala (MIT License).


This repository is no longer maintained

Just create a fork; if you want, I can list it here.


Amazon Simple Storage Service (S3) module for Play 2.6

A minimal S3 API wrapper. Allows you to list, get, add and remove items from a bucket.

Has some extra features that help with direct upload and authenticated url generation.

Note: this version uses the new AWS version 4 signer, which requires you to set the region correctly.

Important changes

10.0.0 - Upgraded to Play 2.7

9.0.0 - Upgraded to Play 2.6 - Upgraded to Scala 2.12

8.0.0 - Upgraded to Play 2.5

7.0.0 - Organisation has been changed to `net.kaliber` - Resolver (Maven repository) has been moved - The `fromConfig` and `fromConfiguration` methods have been renamed to `fromApplication` - Added `fromConfiguration` methods that can be used without access to an application (useful for the application loaders introduced in Play 2.4)

Installation

```scala
val appDependencies = Seq(
  "net.kaliber" %% "play-s3" % "9.0.0"

  // use the following version for Play 2.5
  // "net.kaliber" %% "play-s3" % "8.0.0"
  // use the following version for Play 2.4
  // "net.kaliber" %% "play-s3" % "7.0.2"
  // use the following version for Play 2.3
  // "nl.rhinofly" %% "play-s3" % "6.0.0"
  // use the following version for Play 2.2
  // "nl.rhinofly" %% "play-s3" % "4.0.0"
  // use the following version for Play 2.1
  // "nl.rhinofly" %% "play-s3" % "3.1.1"
)

// use the following for Play 2.5 and 2.4
resolvers += "Kaliber Internal Repository" at "https://jars.kaliber.io/artifactory/libs-release-local"

// use the following for Play 2.3 and below
// resolvers += "Rhinofly Internal Repository" at "http://maven-repository.rhinofly.net:8081/artifactory/libs-release-local"
```

Configuration

`application.conf` should contain the following information:

```
aws.accessKeyId=AmazonAccessKeyId
aws.secretKey=AmazonSecretKey
```

If you are hosting in a specific region, that can be specified. If you are using another S3 implementation (like riakCS), you can customize the domain name and https usage with these values:

```
# default is us-east-1
s3.region="eu-west-1"
# default is determined by the region, see: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
s3.host="your.domain.name"
# default is true
s3.https=false
# default is true
# required in case dots are present in the bucket name and https is enabled
s3.pathStyleAccess=false
```

Usage

Getting an S3 instance:

```scala
val s3 = S3.fromApplication(playApplication)
// or
val s3 = S3.fromConfiguration(wsClient, playConfiguration)
```

Getting a bucket:

```scala
val bucket = s3.getBucket("bucketName")
```

Adding a file:

```scala
// Note that acl and headers are optional; the default value for acl is PUBLIC_READ.
val result = bucket + BucketFile(fileName, mimeType, byteArray, acl, headers)
// or
val result = bucket add BucketFile(fileName, mimeType, byteArray, acl, headers)

result
  .map { unit =>
    Logger.info("Saved the file")
  }
  .recover {
    case S3Exception(status, code, message, originalXml) =>
      Logger.info("Error: " + message)
  }
```
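`BucketFile` takes its payload as an `Array[Byte]`. A minimal sketch of preparing the arguments used above; the file name, mime type, and content are made-up placeholders, not values from the library:

```scala
import java.nio.charset.StandardCharsets.UTF_8

// Placeholder values for the BucketFile arguments (hypothetical example data)
val fileName = "greeting.txt"
val mimeType = "text/plain"
val byteArray: Array[Byte] = "Hello, S3!".getBytes(UTF_8)
// bucket + BucketFile(fileName, mimeType, byteArray) would then upload it
```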

Removing a file:

```scala
val result = bucket - fileName
// or
val result = bucket remove fileName
```

Retrieving a file:

```scala
val result = bucket get "fileName"

result.map {
  case BucketFile(name, contentType, content, acl, headers) => //...
}

// or
val file = Await.result(result, 10.seconds)
val BucketFile(name, contentType, content, acl, headers) = file
```
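The `content` of a retrieved `BucketFile` is an `Array[Byte]`. If the object is known to contain text, decoding it is a one-liner; in this sketch `content` merely stands in for the array from a retrieved file:

```scala
import java.nio.charset.StandardCharsets.UTF_8

// content stands in for BucketFile.content (an Array[Byte])
val content: Array[Byte] = "hello world".getBytes(UTF_8)
val text: String = new String(content, UTF_8)
```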

Listing the contents of a bucket:

```scala
val result = bucket.list

result.map { items =>
  items.map {
    case BucketItem(name, isVirtual) => //...
  }
}

// or using a prefix
val result = bucket list "prefix"
```

Retrieving a private url:

```scala
val url = bucket.url("fileName", expirationFromNowInSeconds)
```
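The second argument is a plain number of seconds. One way to derive it, an illustrative helper of mine rather than part of the library, is via `scala.concurrent.duration`:

```scala
import scala.concurrent.duration._

// Express the expiration as a Duration and convert it to seconds
val expirationFromNowInSeconds: Long = 1.hour.toSeconds
// bucket.url("fileName", expirationFromNowInSeconds) would then yield a
// signed url valid for one hour
```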

Renaming a file:

```scala
val result = bucket rename("oldFileName", "newFileName", ACL)
```

Multipart file upload:

```scala
// Retrieve an upload ticket
val result: Future[BucketFileUploadTicket] =
  bucket initiateMultipartUpload BucketFile(fileName, mimeType)

// Upload the parts and save the tickets
val result: Future[BucketFilePartUploadTicket] =
  bucket uploadPart (uploadTicket, BucketFilePart(partNumber, content))

// Complete the upload using both the upload ticket and the part upload tickets
val result: Future[Unit] =
  bucket completeMultipartUpload (uploadTicket, partUploadTickets)
```
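Parts are numbered starting at 1, and S3 requires every part except the last to be at least 5 MB. A small helper, my own sketch rather than part of the library, for producing the `(partNumber, content)` pairs fed into `BucketFilePart`:

```scala
// Split a payload into numbered chunks, mirroring the partNumber/content
// pairs that BucketFilePart expects. partSize would normally be at least
// 5 MB (the S3 minimum for all parts but the last).
def toParts(content: Array[Byte], partSize: Int): Seq[(Int, Array[Byte])] =
  content.grouped(partSize).zipWithIndex.map {
    case (chunk, index) => (index + 1, chunk) // part numbers start at 1
  }.toSeq
```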

Updating the ACL of a file:

```scala
val result: Future[Unit] = bucket updateACL ("fileName", ACL)
```

Retrieving the ACL of a file:

```scala
val result = testBucket.getAcl("private2README.txt")

for (aclList <- result) yield aclList.map {
  case Grant(READ, Group(uri)) => //...
}
```

Browser upload helpers:

```scala
val `1 minute from now` = System.currentTimeMillis + (1 * 60 * 1000)

// import condition builders
import fly.play.s3.upload.Condition._

// create a policy and set the conditions
val policy =
  testBucket.uploadPolicy(expiration = new Date(`1 minute from now`))
    .withConditions(
      key startsWith "test/",
      acl eq PUBLIC_READ,
      successActionRedirect eq expectedRedirectUrl,
      header(CONTENT_TYPE) startsWith "text/",
      meta("tag").any)
    .toPolicy

// import Form helper
import fly.play.s3.upload.Form

val formFieldsFromPolicy = Form(policy).fields

// convert the form fields from the policy to an actual form
formFieldsFromPolicy
  .map {
    case FormElement(name, value, true) =>
      s"""<input type="text" name="$name" value="$value" />"""
    case FormElement(name, value, false) =>
      s"""<input type="hidden" name="$name" value="$value" />"""
  }

// make sure you add the file form field as last
val allFormFields =
  formFieldsFromPolicy.mkString("\n") +
    """<input type="file" name="file" />"""
```

More examples can be found in the `S3Spec` in the `test` folder. In order to run the tests you need an `application.conf` file in the `test/conf` folder that looks like this:

```
aws.accessKeyId="..."
aws.secretKey="..."

s3.region="eu-west-1"

testBucketName=s3playlibrary.rhinofly.net
```
