🗃 The open source self-hosted web archive. Takes browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...
"Your own personal internet archive" (网站存档 / 爬虫)
ArchiveBox is a powerful self-hosted internet archiving solution written in Python 3. You feed it URLs of pages you want to archive, and it saves them to disk in a variety of formats depending on the configuration and the content it detects. ArchiveBox can be installed via Docker or via `pip`.
Once installed, URLs can be added via the command line (`archivebox add`) or the built-in Web UI (`archivebox server`). It can ingest bookmarks from a service like Pocket/Pinboard, your entire browsing history, RSS feeds, or URLs one at a time.
The main index is a self-contained `data/index.sqlite3` file, and each snapshot is stored as a folder `data/archive/<timestamp>/`, with an easy-to-read `index.json` within. For each page, ArchiveBox auto-extracts many types of assets/media and saves them in standard formats, with out-of-the-box support for: 3 types of HTML snapshots (wget, Chrome headless, SingleFile), a PDF snapshot, a screenshot, a WARC archive, git repositories, images, audio, video, subtitles, article text, and more. The snapshots are browsable and manageable offline through the filesystem, the built-in webserver, or the Python API.
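Because the main index is a standard SQLite database and every snapshot index is plain JSON, you can inspect them with stock tools. A quick sketch (the snapshot timestamp below is a hypothetical example):

```bash
# list the tables in the main index using the stock sqlite3 CLI
sqlite3 data/index.sqlite3 '.tables'

# pretty-print a snapshot's JSON index (replace the timestamp with a real folder name)
python3 -m json.tool data/archive/1602401954/index.json
```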
```bash
docker run -d -it -v ~/archivebox:/data -p 8000:8000 nikisweeting/archivebox server --init 0.0.0.0:8000
docker run -v ~/archivebox:/data -it nikisweeting/archivebox manage createsuperuser
docker run -v ~/archivebox:/data -it nikisweeting/archivebox add 'https://example.com'

open http://127.0.0.1:8000/admin/login/  # then click "Add" in the navbar
```
ArchiveBox is a command line tool, self-hostable web-archiving server, and Python library all-in-one. It's available as a Python3 package or a Docker image; both methods provide the same CLI, Web UI, and on-disk data format.
It works on Docker, macOS, and Linux/BSD. Windows is not officially supported, but users have reported getting it working using WSL2 + Docker.
To use ArchiveBox, you start by creating a folder for your data to live in (it can be anywhere on your system), and running `archivebox init` inside of it. That will create a sqlite3 index and an `ArchiveBox.conf` file. After that, you can continue to add/remove/search/import/export/manage/config/etc. using the CLI (`archivebox help`), or you can run the Web UI (recommended):
```bash
archivebox manage createsuperuser
archivebox server 0.0.0.0:8000
open http://127.0.0.1:8000
```
The CLI is considered "stable", and the ArchiveBox Python API and REST APIs are in "beta".
At the end of the day, the goal is to sleep soundly knowing that the part of the internet you care about will be automatically preserved in multiple, durable long-term formats that will be accessible for decades (or longer). You can also self-host your ArchiveBox server on a public domain to provide archive.org-style public access to your site snapshots.
ArchiveBox supports many input formats for URLs, including Pocket & Pinboard exports, browser bookmarks, browser history, plain text, HTML, markdown, and more!
```bash
echo 'http://example.com' | archivebox add
archivebox add 'https://example.com/some/page'
archivebox add < ~/Downloads/firefox_bookmarks_export.html
archivebox add < any_text_with_urls_in_it.txt
archivebox add --depth=1 'https://example.com/some/downloads.html'
archivebox add --depth=1 'https://news.ycombinator.com#2020-12-12'
```
See the Usage: CLI page for documentation and examples.
It also includes a built-in scheduled import feature and browser bookmarklet, so you can ingest URLs from RSS feeds, websites, or the filesystem regularly.
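For example, a recurring import can be registered with the `archivebox schedule` subcommand (a sketch; run `archivebox help` to confirm the flags available in your version):

```bash
# re-import this RSS feed every day, following links one level deep
archivebox schedule --every=day --depth=1 'https://example.com/feed.rss'
```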
All of ArchiveBox's state (including the index, snapshot data, and config file) is stored in a single folder called the "ArchiveBox data folder". All `archivebox` CLI commands must be run from inside this folder, and you first create it by running `archivebox init`.
The on-disk layout is optimized to be easy to browse by hand and durable long-term. The main index is a standard sqlite3 database (it can also be exported as static JSON/HTML), and the archive snapshots are organized by date-added timestamp in the `archive/` subfolder. Each snapshot subfolder includes a static JSON and HTML index describing its contents, and the snapshot extractor outputs are plain files within the folder, e.g.:
- `index.json` & `index.html`: HTML and JSON index files containing metadata and details
- `title`: title of the site
- `favicon.ico`: favicon of the site
- `example.com/page-name.html`: wget clone of the site, with `.html` appended if not present
- `warc/<timestamp>.gz`: gzipped WARC of all the resources fetched while archiving
- `output.pdf`: printed PDF of the site using headless Chrome
- `screenshot.png`: 1440x900 screenshot of the site using headless Chrome
- `output.html`: DOM dump of the HTML after rendering using headless Chrome
- `archive.org.txt`: a link to the saved site on archive.org
- `media/`: all audio/video files + playlists, including subtitles & metadata, saved with youtube-dl
- `git/`: clone of any repository found on GitHub, Bitbucket, or GitLab links
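Because every extractor output is a plain file, snapshots can be inspected by hand with ordinary tools. A minimal sketch (the timestamp shown is a hypothetical example):

```bash
ls data/archive/1602401954/              # list the extractor outputs for one snapshot
open data/archive/1602401954/index.html  # open the snapshot's own HTML index in a browser
```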
It does everything out-of-the-box by default, but you can disable or tweak individual archive methods via environment variables or the config file.
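For example, using config keys that appear elsewhere in this README:

```bash
# persist a setting in the ArchiveBox.conf config file
archivebox config --set SAVE_ARCHIVE_DOT_ORG=False

# or override a method for a single run via an environment variable
SAVE_FAVICON=False archivebox add 'https://example.com'
```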
You don't need to install all the dependencies; ArchiveBox will automatically enable the relevant modules based on whatever you have available. However, it's recommended to use the official Docker image with everything preinstalled.
ArchiveBox is written in Python 3, so it requires `python3` and `pip3` available on your system. It also uses a set of optional, but highly recommended, external dependencies for archiving sites:

- `wget` (for plain HTML, static files, and WARC saving)
- `chromium` (for screenshots, PDFs, JS execution, and more)
- `youtube-dl` (for audio and video)
- `git` (for cloning git repos)
- `nodejs` (for Readability and SingleFile), and more.
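You can check which of these optional dependencies ArchiveBox has detected at any time:

```bash
archivebox version   # prints the version and the found/missing status of each dependency
```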
If you're importing URLs containing secret slugs or pages with private content (e.g. Google Docs, CodiMD notepads, etc.), you may want to disable some of the extractor modules to avoid leaking private URLs to 3rd party APIs during the archiving process.
```bash
# don't do this:
archivebox add 'https://docs.google.com/document/d/12345somelongsecrethere'
archivebox add 'https://example.com/any/url/you/want/to/keep/secret/'

# without first disabling sharing the URL with 3rd party APIs:
archivebox config --set SAVE_ARCHIVE_DOT_ORG=False  # disable saving all URLs in Archive.org
archivebox config --set SAVE_FAVICON=False          # optional: only the domain is leaked, not the full URL
archivebox config --set CHROME_BINARY=chromium      # optional: switch to chromium instead of chrome if you don't like Google
```
Be aware that malicious archived JS can also read the contents of other pages in your archive due to snapshot CSRF and XSS protections being imperfect. See the Security Overview page for more details.
```bash
# visiting an archived page with malicious JS:
https://127.0.0.1:8000/archive/1602401954/example.com/index.html

# example.com/index.js can now make a request to read everything:
https://127.0.0.1:8000/index.html
https://127.0.0.1:8000/archive/*

# then example.com/index.js can send it off to some evil server
```
Support for saving multiple snapshots of each site over time will be added soon (along with the ability to view diffs of the changes between runs). For now ArchiveBox is designed to only archive each URL with each extractor type once. A workaround to take multiple snapshots of the same URL is to make them slightly different by adding a hash:
```bash
archivebox add 'https://example.com#2020-10-24'
...
archivebox add 'https://example.com#2020-10-25'
```
This is the recommended way of running ArchiveBox.
It comes with everything working out of the box, including all extractors, a headless browser runtime, a full webserver, and CLI interface.
```bash
# docker-compose run archivebox <command> [args]

mkdir archivebox && cd archivebox
wget 'https://raw.githubusercontent.com/pirate/ArchiveBox/master/docker-compose.yml'
docker-compose run archivebox init
docker-compose run archivebox add 'https://example.com'
docker-compose run archivebox manage createsuperuser
docker-compose up
open http://127.0.0.1:8000
```
```bash
# docker run -v $PWD:/data -it nikisweeting/archivebox <command> [args]

mkdir archivebox && cd archivebox
docker run -v $PWD:/data -it nikisweeting/archivebox init
docker run -v $PWD:/data -it nikisweeting/archivebox add 'https://example.com'
docker run -v $PWD:/data -it nikisweeting/archivebox manage createsuperuser

# run the webserver to access the web UI
docker run -v $PWD:/data -it -p 8000:8000 nikisweeting/archivebox server 0.0.0.0:8000
open http://127.0.0.1:8000

# or export a static version of the index if you don't want to run a server
docker run -v $PWD:/data -it nikisweeting/archivebox list --html --with-headers > index.html
docker run -v $PWD:/data -it nikisweeting/archivebox list --json --with-headers > index.json
open ./index.html
```
```bash
# archivebox <command> [args]
```
First install the system, pip, and npm dependencies:
```bash
# Install main dependencies using apt on Ubuntu/Debian, brew on mac, or pkg on BSD
apt install python3 python3-pip python3-dev git curl wget chromium-browser youtube-dl

# Install Node runtime (used for headless browser scripts like Readability, Singlefile, Mercury, etc.)
curl -s https://deb.nodesource.com/gpgkey/nodesource.gpg.key | apt-key add - \
    && echo "deb https://deb.nodesource.com/node_14.x $(lsb_release -cs) main" >> /etc/apt/sources.list \
    && apt-get update \
    && apt-get install --no-install-recommends nodejs

# Make a directory to hold your collection
mkdir archivebox && cd archivebox   # (can be anywhere, doesn't have to be called archivebox)

# Install the archivebox python package in ./.venv
python3 -m venv .venv && source .venv/bin/activate
pip install --upgrade archivebox

# Install node packages in ./node_modules (used for SingleFile, Readability, and Puppeteer)
npm install --prefix . 'git+https://github.com/pirate/ArchiveBox.git'
```
Initialize your archive and add some links:
```bash
archivebox init
archivebox add 'https://example.com'   # add URLs as args or pipe them in via stdin
archivebox add --depth=1 https://example.com/table-of-contents.html

# it can ingest links from many formats, including RSS/JSON/XML/MD/TXT and more
curl https://getpocket.com/users/USERNAME/feed/all | archivebox add
```
Start the webserver to access the web UI:
```bash
archivebox manage createsuperuser
archivebox server 0.0.0.0:8000
open http://127.0.0.1:8000
```
Or export a static HTML version of the index if you don't want to run a webserver:
```bash
archivebox list --html --with-headers > index.html
archivebox list --json --with-headers > index.json
open ./index.html
```
To view more information about your dependencies, data, or the CLI:
```bash
archivebox version
archivebox status
archivebox help
```
Vast treasure troves of knowledge are lost every day on the internet to link rot. As a society, we have an imperative to preserve some important parts of that treasure, just like we preserve our books, paintings, and music in physical libraries long after the originals go out of print or fade into obscurity.
Whether it's to resist censorship by saving articles before they get taken down or edited, or just to save a collection of early 2010s flash games you love to play, having the tools to archive internet content enables you to save the stuff you care most about before it disappears.
The balance between the permanence and ephemeral nature of content on the internet is part of what makes it beautiful. I don't think everything should be preserved in an automated fashion, making all content permanent and never removable, but I do think people should be able to decide for themselves and effectively archive specific content that they care about.
Because modern websites are complicated and often rely on dynamic content, ArchiveBox archives the sites in several different formats beyond what public archiving services like Archive.org and Archive.is are capable of saving. Using multiple methods and the market-dominant browser to execute JS ensures we can save even the most complex, finicky websites in at least a few high-quality, long-term data formats.
All the archived links are stored by date bookmarked in `./archive/`, and everything is indexed nicely with JSON & HTML files. The intent is for all the content to be viewable with common software in 50 to 100 years without needing to run ArchiveBox in a VM.
▶ Check out our community page for an index of web archiving initiatives and projects.
The aim of ArchiveBox is to go beyond what the Wayback Machine and other public archiving services can do, by adding a headless browser to replay sessions accurately, and by automatically extracting all the content in multiple redundant formats that will survive being passed down to historians and archivists through many generations.
ArchiveBox differentiates itself from similar projects by being a simple, one-shot CLI interface for users to ingest bulk feeds of URLs over extended periods, as opposed to being a backend service that ingests individual, manually-submitted URLs from a web UI. However, you can also add URLs via the web interface through our Django frontend.
Unlike crawler software that starts from a seed URL and works outwards, or public tools like Archive.org designed for users to manually submit links from the public internet, ArchiveBox tries to be a set-and-forget archiver suitable for archiving your entire browsing history, RSS feeds, or bookmarks, ~~including private/authenticated content that you wouldn't otherwise share with a centralized service~~ (do not do this until v0.5 is released with some security fixes). Also by having each user store their own content locally, we can save much larger portions of everyone's browsing history than a shared centralized service would be able to handle.
Because ArchiveBox is designed to ingest a firehose of browser history and bookmark feeds to a local disk, it can be much more disk-space intensive than a centralized service like the Internet Archive or Archive.today. However, as storage space gets cheaper and compression improves, you should be able to use it continuously over the years without having to delete anything. In my experience, ArchiveBox uses about 5GB per 1000 articles, but your mileage may vary depending on which options you have enabled and what types of sites you're archiving. By default, it archives everything in as many formats as possible, meaning it takes more space than using a single method, but more content is accurately replayable over extended periods of time. Storage requirements can be reduced by using a compressed/deduplicated filesystem like ZFS/BTRFS, or by setting `SAVE_MEDIA=False` to skip audio & video files.
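For example:

```bash
archivebox config --set SAVE_MEDIA=False   # skip downloading audio/video with youtube-dl
du -sh data/archive/                       # check how much space your snapshots are using
```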
Whether you want to learn which organizations are the big players in the web archiving space, want to find a specific open-source tool for your web archiving need, or just want to see where archivists hang out online, our Community Wiki page serves as an index of the broader web archiving community. Check it out to learn about some of the coolest web archiving projects and communities on the web!
All contributions to ArchiveBox are welcome! Check our issues and Roadmap for things to work on, and please open an issue to discuss your proposed implementation before working on things! Otherwise we may have to close your PR if it doesn't align with our roadmap.
First, install the system dependencies from the "Bare Metal" section above. Then you can clone the ArchiveBox repo and install it for development:

```bash
git clone https://github.com/pirate/ArchiveBox
cd ArchiveBox
git checkout master  # or the branch you want to test
git pull

# install in a virtualenv:
python3 -m venv .venv && source .venv/bin/activate && pip install -e .[dev]

# or with pipenv:
pipenv install --dev && pipenv shell
```
See the `./bin/` folder and read the source of the bash scripts within. You can also run all of these in Docker. For more examples, see the Github Actions CI/CD tests that are run:
```bash
./bin/build_docs.sh
./bin/build_pip.sh
./bin/build_docker.sh
```
`./bin/release.sh` (bumps the version, builds, and pushes a release to PyPI, Docker Hub, and Github Packages)