Manages streaming of data to AWS S3 without knowing the size beforehand and without keeping it all in memory or writing to disk.
This library allows you to efficiently stream large amounts of data to AWS S3 in Java without having to store the whole object in memory or use files. The S3 API requires a content length to be set before an upload starts, which is a problem when you want to generate a large amount of data on the fly. The standard Java AWS SDK simply buffers all the data in memory so that it can calculate the length, which consumes RAM and delays the upload. You can write the data to a temporary file instead, but disk IO is slow (and if your data is already in a file, using this library is pointless).

This library provides an `OutputStream` that packages data written to it into chunks which are sent in a multipart upload. You can also use several streams and upload the data in parallel.
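The chunking mechanism can be sketched as follows. This is a simplified illustration, not the library's actual code — `ChunkingOutputStream` and the uploader callback are invented names: bytes are buffered until a part-sized chunk fills up, then the chunk is handed off for upload and the buffer is reused.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.util.function.Consumer;

// Simplified illustration of the chunking idea (invented names, not the real
// implementation): bytes are buffered until a part-sized chunk fills up, then
// the chunk is handed to an "uploader" callback and the buffer is reused.
// In the real library each chunk becomes one part of an S3 multipart upload.
class ChunkingOutputStream extends OutputStream {

    private final int partSize;
    private final Consumer<byte[]> uploader; // stands in for the S3 part upload
    private final ByteArrayOutputStream buffer = new ByteArrayOutputStream();

    ChunkingOutputStream(int partSize, Consumer<byte[]> uploader) {
        this.partSize = partSize;
        this.uploader = uploader;
    }

    @Override
    public void write(int b) {
        buffer.write(b);
        if (buffer.size() >= partSize) {
            flushPart();
        }
    }

    @Override
    public void write(byte[] b, int off, int len) {
        for (int i = off; i < off + len; i++) {
            write(b[i]);
        }
    }

    @Override
    public void close() {
        if (buffer.size() > 0) {
            flushPart(); // the final part may be smaller than partSize
        }
    }

    private void flushPart() {
        uploader.accept(buffer.toByteArray());
        buffer.reset();
    }
}
```

Writing 25 bytes with a part size of 10 would produce parts of 10, 10 and 5 bytes. The real library additionally runs the part uploads on background threads and must respect S3's 5 MB minimum size for every part except the last.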
The entry point is the class `StreamTransferManager`. Read more in the javadoc, including a usage example.
This is available from Maven Central.
- Added a `checkIntegrity()` method (thanks to @gkolakowski-ias). This allows verifying the upload with MD5 hashes.
- Errors (e.g. OOM) now bubble up unchanged.
- The `checkSize()` method is now private, as users no longer need to call it. You can remove all calls to it.
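To illustrate the idea behind MD5-based verification (a simplified sketch, not the library's internals): S3 returns an ETag for each uploaded part, which for unencrypted parts is the hex MD5 digest of that part's bytes, so a client can recompute the digest locally and compare.

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch of the MD5 computation underlying an integrity check (not the
// library's internal code): hash a part's bytes and render the digest as
// lowercase hex, the same form S3 uses for unencrypted part ETags.
class Md5Check {

    static String hexMd5(byte[] data) {
        try {
            MessageDigest md = MessageDigest.getInstance("MD5");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(data)) {
                sb.append(String.format("%02x", b & 0xff));
            }
            return sb.toString();
        } catch (NoSuchAlgorithmException e) {
            // MD5 is guaranteed to be present in every Java platform
            throw new AssertionError(e);
        }
    }
}
```

The locally computed hex string can then be compared against the ETag returned in the upload-part response; a mismatch indicates corruption in transit.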