


🏙 Retinal is a Serverless AWS Lambda service for resizing images on-demand or event-triggered


A Serverless Framework-based AWS Lambda function, triggered by S3 events, that resizes images with the excellent Sharp module. Because Sharp uses the libvips library, image processing can be 3x-5x faster than with ImageMagick, reducing the time your function spends running and, potentially, its cost dramatically. The function's behaviour is controlled entirely through configuration.



  1. What is it?
  2. Installation
  3. Setup
  4. Testing
  5. Deployment
  6. Configuration
  7. Building
  8. Troubleshooting
  9. Change log

What is it?

A tool that takes images uploaded to an S3 bucket and produces one or more derived images of varying sizes, optimizations and other transformations, all controlled from a simple configuration file. It does this by creating an AWS Lambda function with the help of the Serverless Framework.
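When wired up, the Lambda function receives a standard S3 PUT event for each upload. As an illustrative sketch (this helper is not the project's actual code), extracting the uploaded object's location from such an event looks like:

```javascript
// Illustrative helper (not part of the project): pull the bucket and key
// out of the S3 event that triggers the Lambda function. S3 URL-encodes
// object keys in event payloads (spaces arrive as "+"), so decode them.
function parseS3Event(event) {
  return event.Records.map((record) => ({
    bucket: record.s3.bucket.name,
    key: decodeURIComponent(record.s3.object.key.replace(/\+/g, ' ')),
  }))
}
```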


Installation
Please note, currently the master branch is broken, please use v0.11.0 instead. See comment.

Install with the following commands:

git clone
cd serverless-sharp-image
yarn install

(It is possible to exchange yarn for npm if yarn is too hipster for your taste. No problem.)

Or, if you have serverless installed globally:

serverless install -u

Then, modify the configuration files, adapting them to your needs. More on configuration below.


Setup

You must configure your AWS credentials, either by defining the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables, or by using an AWS profile. You can read more about this in the Serverless Credentials Guide. It's a bit of a pain in the ass if you have many projects/credentials.

In short, either:

export AWS_ACCESS_KEY_ID=<your-key-here>
export AWS_SECRET_ACCESS_KEY=<your-secret-key-here>

or set up a named profile in ~/.aws/credentials and reference it via the profile setting in your configuration.

Make sure the bucket in your configuration matches your setup.


Testing
yarn test

You can also try out the service by invoking it. First deploy it with:

yarn run deploy

and then invoke your function with:

yarn run invoke

This will invoke the function with the included test event. You may need to tweak that file to match your setup.


Deployment

Deploy the service with:
serverless deploy -v

This package bundles a lambda-execution-environment-ready build of the Sharp library, which allows you to deploy the lambda function from any OS.


Configuration
The lambda service is designed to be controlled by configuration. From the configuration you can set up how one or more images will be manipulated, with direct access to the underlying methods of Sharp for full control.

module.exports = {
  name: 'serverless-sharp-image',
  provider: {
    profile: 'CH-CH-CH-CHANGEME',
    stage: 'dev',
    region: 'us-east-1',
  },
  sourceBucket: 'my-sweet-unicorn-media',
  sourcePrefix: 'originals/',
  destinationBucket: 'my-sweet-unicorn-media',
  destinationPrefix: 'web-ready/',
  all: [['rotate'], ['toFormat', 'jpeg', { quality: 80 }]],
  outputs: [
    {
      key: '%(filename)s-200x200.jpg',
      params: {
        ACL: 'public-read',
      },
      operations: [['resize', 200, 200], ['max'], ['withoutEnlargement']],
    },
    {
      key: '%(filename)s-100x100.jpg',
      operations: [['resize', 100, 100], ['max'], ['withoutEnlargement']],
    },
  ],
}

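To make the example concrete: with the configuration above, an upload of originals/omg.jpg would produce web-ready/omg-200x200.jpg and web-ready/omg-100x100.jpg. The key computation can be sketched like this (a hypothetical helper, not the project's code; the real service supports full sprintf-style named placeholders):

```javascript
// Hypothetical sketch (not the project's code): where an output object
// lands, given an output key template and the destinationPrefix from
// the configuration. Substituting only %(filename)s is enough here.
function outputKey(template, filename, destinationPrefix) {
  return destinationPrefix + template.replace('%(filename)s', filename)
}

// e.g. outputKey('%(filename)s-200x200.jpg', 'omg', 'web-ready/')
//      -> 'web-ready/omg-200x200.jpg'
```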
TODO: document configuration better/more detail

all - applied to the image before creating all the outputs

outputs - define the files you wish to generate from the source

  • key: uses sprintf internally
  • params: set some specific S3 options for the image when uploaded to the destination S3 bucket. See more about the param options on the AWS S3's upload method documentation
  • operations: Lists of Sharp methods you want performed on your image. For example, if you want to perform the Sharp method
    sharp(image).resize(200, 300)
    you would define this in your configuration as
    ["resize", 200, 300]
    Note that methods are performed in the order they appear in the configuration, and a different order can produce different results.
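The mapping from an operations list onto chained Sharp calls can be sketched as follows (an illustrative reduction, not the project's actual implementation):

```javascript
// Illustrative sketch: apply an operations list such as
// [['resize', 200, 200], ['max'], ['withoutEnlargement']]
// as chained method calls on a Sharp pipeline. Each entry is
// [methodName, ...args]; Sharp methods return the pipeline itself,
// so the result of one call becomes the receiver of the next.
function applyOperations(pipeline, operations) {
  return operations.reduce(
    (acc, [method, ...args]) => acc[method](...args),
    pipeline
  )
}
```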

Available placeholders for use in the output S3 object's key:

  • key - The full object key with which the service was invoked
    Example: given object key "unicorns/and/pixie/sticks/omg.jpg" -> "unicorns/and/pixie/sticks/omg.jpg"
  • type - The Content-Type of the object, as returned by S3
    Example: given Content-Type "image/jpeg" -> "image/jpeg"
  • crumbs - The crumbs of the S3 object as an array (i.e. the object key split by "/", not including the filename)
    Example: given object key "unicorns/and/pixie/sticks/omg.jpg" -> ["unicorns", "and", "pixie", "sticks"]
  • directory - The "directory" of the S3 object
    Example: given object key "unicorns/and/pixie/sticks/omg.jpg" -> "unicorns/and/pixie/sticks"
  • filename - The file name (minus the last extension)
    Example: given object key "unicorns/and/pixie/sticks/omg.jpg" -> "omg"
  • extension - The file's extension, determined by the Content-Type returned by S3
    Example: given Content-Type "image/png" -> "png"


Building
Although not necessary (a build is pre-packaged/included), if you'd like, you can build the Sharp module's native binaries for Lambda yourself with:

yarn build:sharp

This requires that you have Docker installed and running. More info here.


Troubleshooting
How can I use an existing bucket for my original images and processed output images?

By default, Serverless tries to provision all the resources required by the lambda function by creating a stack in AWS CloudFormation. To use existing buckets, first remove the `s3` event section from the `` configuration in the `serverless.yml` file, then remove the entire `resources` section from `serverless.yml`. Alternatively, if you'd like to use an existing bucket for the original images but have a new processed-images output bucket created, only remove the `s3` event section in `serverless.yml`.

How can I use the same bucket for both the source and destination?

Remove the `imageDestinationBucket` section from the `resources` section in `serverless.yml`.

I keep getting a timeout error when deploying and it's really annoying.

Indeed, that is annoying. I had the same problem, which is why it's now here in this troubleshooting section. This may be an issue in the underlying AWS SDK when using a slower Internet connection. Try setting the `AWS_CLIENT_TIMEOUT` environment variable to a higher value. For example, in your command prompt enter the following and try deploying again:

export AWS_CLIENT_TIMEOUT=3000000

Wait, doesn't Sharp use libvips and node-gyp and therefore need to be compiled in an environment similar to the Lambda execution environment?

Yes, that is true. But it's kind of annoying to have to log into an EC2 instance just to deploy this lambda function, so we've bundled a pre-built version of Sharp and add it to the deployment bundle right before deploying. It was built on an EC2 instance running *Amazon Linux AMI 2015.09.1 x86_64 HVM GP2* - amzn-ami-hvm-2016.03.3.x86_64-gp2 (ami-6869aa05 in us-east-1). You can take a look at it in `lib/sharp-*.tar.gz`.

I got this error when installing: `Error: Python executable "/Users/**/miniconda3/bin/python" is v3.5.2, which is not supported by gyp.` What do I do?

- Make sure you've got a recent version of `npm` installed.
- Make sure you've got a recent version of node-gyp installed. You can do `npm install node-gyp -g` to make sure, but try the next steps first without doing this.
- Set the path to python2 on your system. For example: `npm config set python /usr/bin/python2.7`
- Having done the above, delete the `node_modules` directory in the project and reinstall with `yarn install`.

I got this error when deploying: `An error occurred while provisioning your stack: imageDestinationBucket`

This means that the S3 bucket you configured as the `destinationBucket` (where processed images are uploaded) already exists in S3. To use an existing `imageDestinationBucket`, simply remove the `imageDestinationBucket` section from the `resources` list in `serverless.yml`. See also [this question](#EMTBwg).

Aaaaaarggghhhhhh!!! Uuurrrggghhhhhh!

Have you tried filing an Issue?

Change log

