Zstandard - Fast real-time compression algorithm
Zstandard, or `zstd` as the short version, is a fast lossless compression algorithm, targeting real-time compression scenarios at zlib-level and better compression ratios. It's backed by a very fast entropy stage, provided by Huff0 and FSE library.
This repository represents the reference implementation, provided as an open-source C library, and a command line utility producing and decoding `.zst`, `.gz`, `.xz` and `.lz4` files. Should your project require another programming language, a list of known ports and bindings is provided on the Zstandard homepage.
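For a quick feel of the command-line tool, here is a minimal usage sketch (file names are just examples):

```sh
# Compress 'example.txt' into 'example.txt.zst' (default level 3); the source file is kept
zstd example.txt

# Decompress to a chosen output name (the original still exists, so pick a new one)
zstd -d example.txt.zst -o example.copy.txt
```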
For reference, several fast compression algorithms were tested and compared on a server running Arch Linux (Linux version 5.5.11-arch1-1), with a Core i9-9900K CPU @ 5.0GHz, using lzbench, an open-source in-memory benchmark by @inikep compiled with gcc 9.3.0, on the Silesia compression corpus.
| Compressor name | Ratio | Compression | Decompress. |
| --------------- | ----- | ----------- | ----------- |
| zstd 1.4.5 -1 | 2.884 | 500 MB/s | 1660 MB/s |
| zlib 1.2.11 -1 | 2.743 | 90 MB/s | 400 MB/s |
| brotli 1.0.7 -0 | 2.703 | 400 MB/s | 450 MB/s |
| zstd 1.4.5 --fast=1 | 2.434 | 570 MB/s | 2200 MB/s |
| zstd 1.4.5 --fast=3 | 2.312 | 640 MB/s | 2300 MB/s |
| quicklz 1.5.0 -1 | 2.238 | 560 MB/s | 710 MB/s |
| zstd 1.4.5 --fast=5 | 2.178 | 700 MB/s | 2420 MB/s |
| lzo1x 2.10 -1 | 2.106 | 690 MB/s | 820 MB/s |
| lz4 1.9.2 | 2.101 | 740 MB/s | 4530 MB/s |
| zstd 1.4.5 --fast=7 | 2.096 | 750 MB/s | 2480 MB/s |
| lzf 3.6 -1 | 2.077 | 410 MB/s | 860 MB/s |
| snappy 1.1.8 | 2.073 | 560 MB/s | 1790 MB/s |
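You can get a rough comparison on your own machine with zstd's built-in in-memory benchmark mode; a small sketch (use any local file as the test corpus):

```sh
# Benchmark compression levels 1 through 5 on a local file
zstd -b1 -e5 silesia.tar
```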
The negative compression levels, specified with `--fast=#`, offer faster compression and decompression speed in exchange for some loss in compression ratio compared to level 1, as seen in the table above.
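A quick sketch of selecting a negative level from the command line (file name illustrative):

```sh
# --fast=3 selects compression level -3: faster, at some cost in ratio
zstd --fast=3 example.txt
```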
Zstd can also offer stronger compression ratios at the cost of compression speed. Speed vs Compression trade-off is configurable by small increments. Decompression speed is preserved and remains roughly the same at all settings, a property shared by most LZ compression algorithms, such as zlib or lzma.
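Conversely, a sketch of the slower, stronger end of the range (file names illustrative):

```sh
# Higher levels trade compression speed for ratio; 19 is the highest regular level
zstd -19 example.txt -o example.l19.zst

# Levels 20-22 require --ultra and use considerably more memory
zstd --ultra -22 example.txt -o example.l22.zst
```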
The following tests were run on a server running Linux Debian (Linux version 4.14.0-3-amd64) with a Core i7-6700K CPU @ 4.0GHz, using lzbench, an open-source in-memory benchmark by @inikep compiled with gcc 7.3.0, on the Silesia compression corpus.
(Chart: Compression Speed vs Ratio)
A few other algorithms can produce higher compression ratios at slower speeds, falling outside of the graph. For a larger picture including slow modes, click on this link.
Previous charts provide results applicable to typical file and stream scenarios (several MB). Small data comes with different perspectives.
The smaller the amount of data to compress, the more difficult it is to compress. This problem is common to all compression algorithms, and the reason is that compression algorithms learn from past data how to compress future data. But at the beginning of a new data set, there is no "past" to build upon.
To solve this situation, Zstd offers a training mode, which can be used to tune the algorithm for a selected type of data. Training Zstandard is achieved by providing it with a few samples (one file per sample). The result of this training is stored in a file called "dictionary", which must be loaded before compression and decompression. Using this dictionary, the compression ratio achievable on small data improves dramatically.
The following example uses the `github-users` sample set, created from github public API. It consists of roughly 10K records weighing about 1KB each.
(Charts: Compression Speed | Decompression Speed)
These compression gains are achieved while simultaneously providing faster compression and decompression speeds.
Training works if there is some correlation in a family of small data samples. The more data-specific a dictionary is, the more efficient it is (there is no universal dictionary). Hence, deploying one dictionary per type of data will provide the greatest benefits. Dictionary gains are mostly effective in the first few KB. Then, the compression algorithm will gradually use previously decoded content to better compress the rest of the file.
Dictionary compression how-to:
1. Create the dictionary: `zstd --train FullPathToTrainingSet/* -o dictionaryName`
2. Compress with the dictionary: `zstd -D dictionaryName FILE`
3. Decompress with the dictionary: `zstd -D dictionaryName --decompress FILE.zst`
If your system is compatible with standard `make` (or `gmake`), invoking `make` in the root directory will generate the `zstd` cli in the root directory.
Other available options include:
- `make install` : create and install zstd cli, library and man pages
- `make check` : create and run `zstd`, tests its behavior on local platform
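A typical sequence might look as follows (a sketch; whether `sudo` is needed depends on the install prefix):

```sh
# Build the zstd CLI in the root directory
make

# Install the CLI, library and man pages
sudo make install

# Build zstd and run the local tests
make check
```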
A `cmake` project generator is provided within `build/cmake`. It can generate Makefiles or other build scripts to create `libzstd` dynamic and static libraries. By default, `CMAKE_BUILD_TYPE` is set to `Release`.
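For example, an out-of-source build could be driven like this (a sketch, assuming CMake 3.13+ for the `-S`/`-B` options):

```sh
# From the repository root: generate build scripts, then build
cmake -S build/cmake -B build-cmake -DCMAKE_BUILD_TYPE=Release
cmake --build build-cmake
```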
You can build and install zstd using the vcpkg dependency manager:

    git clone https://github.com/Microsoft/vcpkg.git
    cd vcpkg
    ./bootstrap-vcpkg.sh
    ./vcpkg integrate install
    ./vcpkg install zstd
The zstd port in vcpkg is kept up to date by Microsoft team members and community contributors. If the version is out of date, please create an issue or pull request on the vcpkg repository.
Going into the `build` directory, you will find additional possibilities:
- Projects for Visual Studio 2005, 2008 and 2010.
  + The VS2010 project is compatible with VS2012, VS2013, VS2015 and VS2017.
- Automated build scripts for the Visual compiler by @KrzysFR, in `build/VS_scripts`, which will build the `libzstd` library without any need to open a Visual Studio solution.
You can build the zstd binary via buck by executing `buck build programs:zstd` from the root of the repo. The output binary will be in
You can run quick local smoke tests by executing the `playTest.sh` script from the `src/tests` directory. Two env variables, `$ZSTD_BIN` and `$DATAGEN_BIN`, are needed for the test script to locate the `zstd` and `datagen` binaries. For information on CI testing, please refer to TESTING.md.
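For example (a sketch; the binary paths are placeholders and depend on where `zstd` and `datagen` were built):

```sh
# Point the script at the zstd and datagen binaries, then run the smoke tests
cd src/tests
ZSTD_BIN=/path/to/zstd DATAGEN_BIN=/path/to/datagen ./playTest.sh
```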
Zstandard is currently deployed within Facebook. It is used continuously to compress large amounts of data in multiple formats and use cases. Zstandard is considered safe for production environments.
The `dev` branch is the one where all contributions are merged before reaching `release`. If you plan to propose a patch, please commit into the `dev` branch, or its own feature branch. Direct commits to `release` are not permitted. For more information, please read CONTRIBUTING.