# fflate

High performance (de)compression in an 8kB package

## Why fflate?

`fflate` (short for fast flate) is the fastest, smallest, and most versatile pure JavaScript compression and decompression library in existence, handily beating `pako`, `tiny-inflate`, and `UZIP.js` in performance benchmarks while being multiple times more lightweight. Its compression ratios are often better than even the original Zlib C library. It includes support for DEFLATE, GZIP, and Zlib data. Data compressed by `fflate` can be decompressed by other tools, and vice versa.
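
That interoperability is easy to check. A minimal sketch, assuming a recent Node.js runtime (so the built-in `node:zlib` module is importable); none of this is part of `fflate`'s own API beyond `gzipSync` and `strToU8`:

```js
import { gzipSync, strToU8 } from 'fflate';
import { gunzipSync } from 'node:zlib';

// Compress with fflate...
const compressed = gzipSync(strToU8('fflate emits standard GZIP'));

// ...then decompress with Node's native Zlib bindings. This works because
// fflate writes spec-compliant GZIP streams.
const roundTripped = gunzipSync(compressed);
console.log(roundTripped.toString()); // fflate emits standard GZIP
```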

In addition to the base decompression and compression APIs, `fflate` supports high-speed ZIP file archiving for an extra 3 kB. In fact, the compressor, in synchronous mode, compresses both more quickly and with a higher compression ratio than most compression software (even Info-ZIP, a C program), and in asynchronous mode it can utilize multiple threads to achieve over 3x the performance of any other utility.
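
For a taste of that archiving API (covered in detail later; `fileData` is a hypothetical `Uint8Array`):

```js
import { zipSync, strToU8 } from 'fflate';

// Synchronous, single-threaded ZIP creation; the async zip() variant
// described below parallelizes compression across workers.
const zipped = zipSync({
  'readme.txt': strToU8('Hello!'),
  'data.bin': fileData
});
```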

|                             | `pako` | `tiny-inflate`     | `UZIP.js` | `fflate`                   |
|-----------------------------|--------|--------------------|-----------|----------------------------|
| Decompression performance   | 1x     | Up to 40% slower   | Up to 40% faster | Up to 40% faster    |
| Compression performance     | 1x     | N/A                | Up to 5% faster  | Up to 50% faster    |
| Base bundle size (minified) | 45.6kB | 3kB (inflate only) | 14.2kB    | 8kB (3kB for inflate only) |
| Compression support         | ✅     | ❌                 | ✅        | ✅                         |
| Thread/Worker safe          | ✅     | ✅                 | ❌        | ✅                         |
| ZIP support                 | ❌     | ❌                 | ✅        | ✅                         |
| Streaming support           | ✅     | ❌                 | ❌        | ✅                         |
| GZIP/Zlib support           | ✅     | ❌                 | ❌        | ✅                         |
| Supports files up to 4GB    | ✅     | ❌                 | ❌        | ✅                         |
| Doesn't hang on error       | ✅     | ❌                 | ❌        | ✅                         |
| Multi-thread/Asynchronous   | ❌     | ❌                 | ❌        | ✅                         |
| Streaming ZIP support       | ❌     | ❌                 | ❌        | ✅                         |
| Uses ES Modules             | ❌     | ❌                 | ❌        | ✅                         |


If you'd like to try `fflate` for yourself without installing it, you can take a look at the browser demo. Since `fflate` is a pure JavaScript library, it works in both the browser and Node.js (see Browser support for more info).



## Usage

Install `fflate`:

```sh
npm i fflate # or yarn add fflate, or pnpm add fflate
```

Import:

```js
// I will assume that you use the following for the rest of this guide
import * as fflate from 'fflate';

// However, you should import ONLY what you need to minimize bloat.
// So, if you just need GZIP compression support:
import { gzipSync } from 'fflate';
// Woo! You just saved 20 kB off your bundle with one line.
```

If your environment doesn't support ES Modules (e.g. Node.js):

```js
// Try to avoid this when using fflate in the browser, as it will import
// all of fflate's components, even those that you aren't using.
const fflate = require('fflate');
```

If you want to load from a CDN in the browser:

```html
<!--
You should use either UNPKG or jsDelivr (i.e. only one of the following)

Note that tree shaking is completely unsupported from the CDN. If you want
a small build without build tools, please ask me and I will make one manually
with only the features you need. This build is about 27kB, or 9kB gzipped.

You may also want to specify the version, e.g. with fflate@x.y.z
-->
<script src="https://unpkg.com/fflate"></script>
<script src="https://cdn.jsdelivr.net/npm/fflate/umd/index.js"></script>

<!-- Now, the global variable fflate contains the library -->
```

If you are using Deno:

```js
// Don't use the ?dts Skypack flag; it isn't necessary for Deno support
// The @deno-types comment adds TypeScript typings

// @deno-types="https://cdn.skypack.dev/fflate/lib/index.d.ts"
import * as fflate from 'https://cdn.skypack.dev/fflate?min';
```

If your environment doesn't support bundling:

```js
// Again, try to import just what you need

// For the browser:
import * as fflate from 'fflate/esm/browser.js';
// If the standard ESM import fails on Node (i.e. older version):
import * as fflate from 'fflate/esm';
```

And use:

```js
// This is an ArrayBuffer of data
const massiveFileBuf = await fetch('/aMassiveFile').then(
  res => res.arrayBuffer()
);
// To use fflate, you need a Uint8Array
const massiveFile = new Uint8Array(massiveFileBuf);
// Note that Node.js Buffers work just fine as well:
// const massiveFile = require('fs').readFileSync('aMassiveFile.txt');

// Higher level means lower performance but better compression
// The level ranges from 0 (no compression) to 9 (max compression)
// The default level is 6
const notSoMassive = fflate.zlibSync(massiveFile, { level: 9 });
const massiveAgain = fflate.unzlibSync(notSoMassive);
const gzipped = fflate.gzipSync(massiveFile, {
  // GZIP-specific: the filename to use when decompressed
  filename: 'aMassiveFile.txt',
  // GZIP-specific: the modification time. Can be a Date, date string,
  // or Unix timestamp
  mtime: '9/1/16 2:00 PM'
});
```

`fflate` can autodetect a compressed file's format as well:

```js
const compressed = new Uint8Array(
  await fetch('/GZIPorZLIBorDEFLATE').then(res => res.arrayBuffer())
);
// Above example with Node.js Buffers:
// Buffer.from('H4sIAAAAAAAAE8tIzcnJBwCGphA2BQAAAA==', 'base64');

const decompressed = fflate.decompressSync(compressed);
```

Using strings is easy with `fflate`'s string conversion API:

```js
const buf = fflate.strToU8('Hello world!');

// The default compression method is gzip
// Increasing mem may increase performance at the cost of memory
// The mem ranges from 0 to 12, where 4 is the default
const compressed = fflate.compressSync(buf, { level: 6, mem: 8 });

// When you need to decompress:
const decompressed = fflate.decompressSync(compressed);
const origText = fflate.strFromU8(decompressed);
console.log(origText); // Hello world!
```

If you need to use an (albeit inefficient) binary string, you can set the second argument to `true`.

```js
const buf = fflate.strToU8('Hello world!');

// The second argument, latin1, is a boolean that indicates that the data
// is not Unicode but rather should be encoded and decoded as Latin-1.
// This is useful for creating a string from binary data that isn't
// necessarily valid UTF-8. However, binary strings are incredibly
// inefficient and tend to double file size, so they're not recommended.
const compressedString = fflate.strFromU8(
  fflate.compressSync(buf),
  true
);
const decompressed = fflate.decompressSync(
  fflate.strToU8(compressedString, true)
);
const origText = fflate.strFromU8(decompressed);
console.log(origText); // Hello world!
```

You can use streams as well to incrementally add data to be compressed or decompressed:

```js
// This example uses synchronous streams, but for the best experience
// you'll definitely want to use asynchronous streams.

let outStr = '';
const gzipStream = new fflate.Gzip({ level: 9 }, (chunk, isLast) => {
  // accumulate in an inefficient binary string (just an example)
  outStr += fflate.strFromU8(chunk, true);
});

// You can also attach the data handler separately if you don't want to
// do so in the constructor.
gzipStream.ondata = (chunk, final) => { ... };

// Since this is synchronous, all errors will be thrown by stream.push()
gzipStream.push(chunk1);
gzipStream.push(chunk2);

// You should mark the last chunk by using true in the second argument
// In addition to being necessary for the stream to work properly, this
// will also set the isLast parameter in the handler to true.
gzipStream.push(lastChunk, true);

console.log(outStr); // The compressed binary string is now available

// The options parameter for compression streams is optional; you can
// provide one parameter (the handler) or none at all if you set
// deflateStream.ondata later.
const deflateStream = new fflate.Deflate((chunk, final) => {
  console.log(chunk, final);
});

// If you want to create a stream from strings, use EncodeUTF8
const utfEncode = new fflate.EncodeUTF8((data, final) => {
  // Chaining streams together is done by pushing to the
  // next stream in the handler for the previous stream
  deflateStream.push(data, final);
});

utfEncode.push('Hello'.repeat(1000));
utfEncode.push(' '.repeat(100));
utfEncode.push('world!'.repeat(10), true);

// The deflateStream has logged the compressed data

const inflateStream = new fflate.Inflate();
inflateStream.ondata = (decompressedChunk, final) => { ... };

let stringData = '';

// Streaming UTF-8 decode is available too
const utfDecode = new fflate.DecodeUTF8((data, final) => {
  stringData += data;
});

// Decompress streams auto-detect the compression method, as the
// non-streaming decompress() method does.
const dcmpStrm = new fflate.Decompress((chunk, final) => {
  console.log(chunk, 'was encoded with GZIP, Zlib, or DEFLATE');
  utfDecode.push(chunk, final);
});

dcmpStrm.push(zlibJSONData1);
dcmpStrm.push(zlibJSONData2, true);

// This succeeds: the Decompress stream was chained into the UTF-8
// decoder, which accumulated the string in stringData as the sink.
console.log(JSON.parse(stringData));
```

You can create multi-file ZIP archives easily as well. Note that by default, compression is enabled for all files, which is not useful when ZIPping many PNGs, JPEGs, PDFs, etc. because those formats are already compressed. You should either override the level on a per-file basis or globally to avoid wasting resources.

```js
// Note that the asynchronous version (see below) runs in parallel and
// is much (up to 3x) faster for larger archives.
const zipped = fflate.zipSync({
  // Directories can be nested structures, as in an actual filesystem
  'dir1': {
    'nested': {
      // You can use Unicode in filenames
      '你好.txt': fflate.strToU8('Hey there!')
    },
    // You can also manually write out a directory path
    'other/tmp.txt': new Uint8Array([97, 98, 99, 100])
  },
  // You can also provide compression options
  'myImageData.bmp': [aMassiveFile, {
    level: 9,
    mem: 12,
    // ZIP-specific: mtime works here too, defaults to current time
    mtime: new Date('10/20/2020')
  }],
  // PNG is pre-compressed; no need to waste time
  'superTinyFile.png': [aPNGFile, { level: 0 }]
}, {
  // These options are the defaults for all files, but file-specific
  // options take precedence.
  level: 1,
  // Obfuscate mtime by default
  mtime: 0
});

// If you write the zipped data to disk and unzip it, the folder
// structure will be outputted as:

// (original file)
// dir1
// |-> nested
// |   |-> 你好.txt
// |-> other
// |   |-> tmp.txt
// myImageData.bmp
// superTinyFile.png

// When decompressing, folders are not nested; all filepaths are fully
// written out in the keys. For example, the return value may be:
// { 'nested/directory/a2.txt': Uint8Array(2) [97, 97] }
const decompressed = fflate.unzipSync(zipped);
```

If you need extremely high performance or custom ZIP compression formats, you can use the highly-extensible ZIP streams. They take streams as both input and output. You can even use custom compression/decompression algorithms from other libraries, as long as they are defined in the ZIP spec (see section 4.4.5). If you'd like more info on using custom compressors, feel free to ask.

```js
// ZIP object
// Can also specify zip.ondata outside of the constructor
const zip = new fflate.Zip((err, dat, final) => {
  if (!err) {
    // output of the streams
    console.log(dat, final);
  }
});

const helloTxt = new fflate.ZipDeflate('hello.txt', { level: 9 });

// Always add streams to ZIP archives before pushing to those streams
zip.add(helloTxt);

helloTxt.push(chunk1);
// Last chunk
helloTxt.push(chunk2, true);

// ZipPassThrough is like ZipDeflate with level 0, but allows for tree shaking
const nonStreamingFile = new fflate.ZipPassThrough('test.png');
zip.add(nonStreamingFile);
// If you have data already loaded, just .push(data, true)
nonStreamingFile.push(pngData, true);

// You need to call .end() after finishing
// This ensures the ZIP is valid
zip.end();

// Unzip object
const unzipper = new fflate.Unzip();

// This function will almost always have to be called. It is used to support
// compression algorithms such as BZIP2 or LZMA in ZIP files if just DEFLATE
// is not enough (though it almost always is).
// If your ZIP files are not compressed, this line is not needed.
unzipper.register(fflate.UnzipInflate);

const neededFiles = ['file1.txt', 'example.json'];

// Can specify handler in constructor too
unzipper.onfile = file => {
  // file.name is a string, file is a stream
  if (neededFiles.includes(file.name)) {
    file.ondata = (err, dat, final) => {
      // Stream output here
      console.log(dat, final);
    };

    // File sizes are sometimes not set if the ZIP file did not encode
    // them, so you may want to check that file.size != undefined
    console.log('Compressed size', file.size);
    console.log('Decompressed size', file.originalSize);

    // You should only start the stream if you plan to use it to improve
    // performance. Only after starting the stream will ondata be called.
    // This method will throw if the compression method hasn't been registered
    file.start();
  }
};

// Try to keep under 5,000 files per chunk to avoid stack limit errors
// For example, if all files are a few kB, multi-megabyte chunks are OK
// If files are mostly under 100 bytes, 64kB chunks are the limit
unzipper.push(zipChunk1);
unzipper.push(zipChunk2);
unzipper.push(zipChunk3, true);
```

As you may have guessed, there is an asynchronous version of every method as well. Unlike most libraries, this will cause the compression or decompression to run in a separate thread entirely and automatically by using Web (or Node) Workers (as of now, Deno is unsupported). This means that the processing will not block the main thread at all.

Note that there is a significant initial overhead to using workers of about 70ms, so it's best to avoid the asynchronous API unless necessary. However, if you're compressing multiple large files at once, or the synchronous API causes the main thread to hang for too long, the callback APIs are an order of magnitude better.

```js
import {
  gzip, zlib, AsyncGzip, zip, unzip, strToU8, strFromU8,
  Zip, AsyncZipDeflate, Unzip, AsyncUnzipInflate
} from 'fflate';

// Workers will work in almost any browser (even IE11!)
// However, they fail below Node v12 without the --experimental-worker
// CLI flag, and will fail entirely on Node below v10.

// All of the async APIs use a node-style callback as so:
const terminate = gzip(aMassiveFile, (err, data) => {
  if (err) {
    // The compressed data was likely corrupt, so we have to handle
    // the error.
    return;
  }
  // Use data however you like
  console.log(data.length);
});

if (needToCancel) {
  // The return value of any of the asynchronous APIs is a function that,
  // when called, will immediately cancel the operation. The callback
  // will not be called.
  terminate();
}

// If you wish to provide options, use the second argument.

// The consume option will render the data inside aMassiveFile unusable,
// but can improve performance and dramatically reduce memory usage.
zlib(aMassiveFile, { consume: true, level: 9 }, (err, data) => {
  // Use the data
});

// Asynchronous streams are similar to synchronous streams, but the
// handler has the error that occurred (if any) as the first parameter,
// and they don't block the main thread.

// Additionally, any buffers that are pushed in will be consumed and
// rendered unusable; if you need to use a buffer you push in, you
// should clone it first.
const gzs = new AsyncGzip({ level: 9, mem: 12, filename: 'hello.txt' });
let wasCallbackCalled = false;
gzs.ondata = (err, chunk, final) => {
  // Note the new err parameter
  if (err) {
    // Note that after this occurs, the stream becomes corrupt and must
    // be discarded. You can't continue pushing chunks and expect it to
    // work.
    console.error(err);
    return;
  }
  wasCallbackCalled = true;
};
gzs.push(chunk);

// Since the stream is asynchronous, the callback will not be called
// immediately. If such behavior is absolutely necessary (it shouldn't
// be), use synchronous streams.
console.log(wasCallbackCalled); // false

// To terminate an asynchronous stream's internal worker, call
// stream.terminate().
gzs.terminate();

// This is way faster than zipSync because the compression of multiple
// files runs in parallel. In fact, the fact that it's parallelized
// makes it faster than most standalone ZIP CLIs. The effect is most
// significant for multiple large files; less so for many small ones.
zip({ f1: aMassiveFile, 'f2.txt': anotherMassiveFile }, {
  // The options object is still optional; you can still do just
  // zip(archive, callback)
  level: 6,
  mtime: 0
}, (err, data) => {
  // Save the ZIP file
});

// unzip is the only async function without support for the consume option
// Also parallelized, so unzip is often much faster than unzipSync
unzip(aMassiveZIPFile, (err, unzipped) => {
  // If the archive has data.xml, log it here
  console.log(unzipped['data.xml']);
  // Conversion to string
  console.log(strFromU8(unzipped['data.xml']));
});

// Streaming ZIP archives can accept asynchronous streams. This automatically
// uses multicore compression.
const zipper = new Zip();
zipper.ondata = (err, chunk, final) => { ... };

// The JSON and BMP are compressed in parallel
const exampleFile = new AsyncZipDeflate('example.json');
zipper.add(exampleFile);
exampleFile.push(strToU8(JSON.stringify({ large: 'object' })), true);

const exampleFile2 = new AsyncZipDeflate('example2.bmp', { level: 9 });
zipper.add(exampleFile2);
exampleFile2.push(ec2a);
exampleFile2.push(ec2b);
exampleFile2.push(ec2c);
// ...
exampleFile2.push(ec2Final, true);
zipper.end();

// Streaming Unzip should register the asynchronous inflation algorithm
// for parallel processing.
const unzipStream = new Unzip(stream => {
  if (stream.name.endsWith('.json')) {
    stream.ondata = (err, chunk, final) => { ... };
    stream.start();

    if (needToCancel) {
      // To cancel these streams, call .terminate()
      stream.terminate();
    }
  }
});
unzipStream.register(AsyncUnzipInflate);
unzipStream.push(data, true);
```

See the documentation for more detailed information about the API.

## Bundle size estimates


Since `fflate` uses ES Modules, this table should give you a general idea of `fflate`'s bundle size for the features you need. The maximum bundle size that is possible with `fflate` is about 27kB if you use every single feature, but feature parity with `pako` is only around 10kB (as opposed to 45kB from `pako`). If your bundle size increases dramatically after adding `fflate`, please create an issue.

| Feature                 | Bundle size (minified)        | Nearest competitor     |
|-------------------------|-------------------------------|------------------------|
| Decompression           | 3kB                           | `tiny-inflate`         |
| Compression             | 5kB                           | `UZIP.js`, 184% larger |
| Async decompression     | 4kB (1kB + raw decompression) | N/A                    |
| Async compression       | 6kB (1kB + raw compression)   | N/A                    |
| ZIP decompression       | 5kB (2kB + raw decompression) | `UZIP.js`, 184% larger |
| ZIP compression         | 7kB (2kB + raw compression)   | `UZIP.js`, 103% larger |
| GZIP/Zlib decompression | 4kB (1kB + raw decompression) | `pako`, 1040% larger   |
| GZIP/Zlib compression   | 5kB (1kB + raw compression)   | `pako`, 812% larger    |
| Streaming decompression | 4kB (1kB + raw decompression) | `pako`, 1040% larger   |
| Streaming compression   | 5kB (1kB + raw compression)   | `pako`, 812% larger    |
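
A sketch of what a minimal build looks like in practice, assuming a tree-shaking bundler such as Rollup or esbuild (`deflateData` is a hypothetical `Uint8Array` of raw DEFLATE input):

```js
// Importing only the raw DEFLATE decompressor corresponds to the ~3kB
// "Decompression" row above: a tree-shaking bundler can drop every other
// component of fflate from the output bundle.
import { inflateSync } from 'fflate';

const decompressed = inflateSync(deflateData);
```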

## What makes `fflate` so fast?

Many JavaScript compression/decompression libraries exist. However, the most popular one, `pako`, is merely a clone of Zlib rewritten nearly line-for-line in JavaScript. Although it is by no means poorly made, `pako` doesn't recognize the many differences between JavaScript and C, and therefore is suboptimal for performance. Moreover, even when minified, the library is 45 kB; it may not seem like much, but for anyone concerned with optimizing bundle size (especially library authors), it's more weight than necessary.

Note that there exist some small libraries like `tiny-inflate` for solely decompression, and with a minified size of 3 kB, it can be appealing; however, its performance is lackluster, typically 40% worse than `pako` in my tests.

`UZIP.js` is both faster (by up to 40%) and smaller (14 kB minified) than `pako`, and it contains a variety of innovations that make it excellent for both performance and compression ratio. However, the developer made a variety of tiny mistakes and inefficient design choices that make it imperfect. Moreover, it does not support GZIP or Zlib data directly; one must remove the headers manually to use it.

So what makes `fflate` different? It takes the brilliant innovations of `UZIP.js` and optimizes them while adding direct support for GZIP and Zlib data. And unlike all of the above libraries, it uses ES Modules to allow for partial builds through tree shaking, meaning that it can rival even `tiny-inflate` in size while maintaining excellent performance. The end result is a library that, in total, weighs 8kB minified for the core build (3kB for decompression only and 5kB for compression only), is about 15% faster than `UZIP.js` or up to 60% faster than `pako`, and achieves the same or better compression ratio than the rest.

If you're willing to have 160 kB of extra weight and much less browser support, you could theoretically achieve more performance than `fflate` with a WASM build of Zlib. However, per some tests I conducted, the WASM interpreters of major browsers are not fast enough as of December 2020 for that to be useful: `fflate` is around 2x faster.

Before you decide that `fflate` is the end-all compression library, you should note that JavaScript simply cannot rival the performance of a native program. If you're only using Node.js, it's probably better to use the native Zlib bindings, which tend to offer the best performance. Though note that even against Zlib, `fflate` is only around 30% slower in decompression and 10% slower in compression, and can still achieve better compression ratios!
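
For comparison, a minimal sketch of those native bindings (this is Node's built-in `node:zlib` module, not part of `fflate`):

```js
import { gzipSync, gunzipSync } from 'node:zlib';

// Node's zlib is a binding to the Zlib C library, so it generally
// outperforms any pure-JavaScript implementation.
const compressed = gzipSync(Buffer.from('some data'), { level: 9 });
const decompressed = gunzipSync(compressed);
console.log(decompressed.toString()); // some data
```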

## Browser support

`fflate` makes heavy use of typed arrays (`Uint8Array`, `Uint16Array`, etc.). Typed arrays can be polyfilled at the cost of performance, but the most recent browser that doesn't support them is from 2011, so I wouldn't bother.

The asynchronous APIs also use `Worker`, which is not supported in a few browsers (however, the vast majority of browsers that support typed arrays support `Worker` as well).

Other than that, `fflate` is completely ES3, meaning you probably won't even need a bundler to use it.
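
A hedged sketch of acting on the `Worker` caveat above in a browser: feature-detect and fall back to the synchronous API (`data` is a hypothetical `Uint8Array`):

```js
import { gzip, gzipSync } from 'fflate';

if (typeof Worker !== 'undefined') {
  // The async API spawns a Worker internally, so guard on its presence
  gzip(data, (err, compressed) => {
    if (!err) console.log('compressed on a worker:', compressed.length);
  });
} else {
  // No Worker support: compress on the main thread instead
  console.log('compressed synchronously:', gzipSync(data).length);
}
```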


## Testing

You can validate the performance of `fflate` with `npm test`. It validates that the module is working as expected, ensures the outputs are no more than 5% larger than competitors at max compression, and outputs performance metrics to `test/results`.

Note that the time it takes for the CLI to show the completion of each test is not representative of the time each package took, so please check the JSON output if you want accurate measurements.


## License

This software is MIT Licensed, with special exemptions for projects and organizations as noted below:

- SheetJS is exempt from MIT licensing and may license any source code from this software under the BSD Zero Clause License
