
🗃 Open source self-hosted web archiving. Takes URLs/browser history/bookmarks/Pocket/Pinboard/etc., saves HTML, JS, PDFs, media, and more...

Home Page: https://archivebox.io

License: MIT License


archivebox's Introduction

ArchiveBox
Open-source self-hosted web archiving.


▶️ Quickstart | Demo | GitHub | Documentation | Info & Motivation | Community


   



ArchiveBox is a powerful, self-hosted internet archiving solution to collect, save, and view websites offline.

Without active preservation effort, everything on the internet eventually disappears or degrades. Archive.org does a great job as a centralized service, but its saved URLs have to be public, and it can't save every type of content.

ArchiveBox is an open source tool that lets organizations & individuals archive both public & private web content while retaining control over their data. It can be used to save copies of bookmarks, preserve evidence for legal cases, backup photos from FB/Insta/Flickr or media from YT/Soundcloud/etc., save research papers, and more...

➡️ Get ArchiveBox with pip install archivebox on Linux, macOS, and Windows (WSL2), or via Docker ⭐️.

Once installed, it can be used as a CLI tool, self-hosted Web App, Python library, or one-off command.
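For example, the same collection can be driven in any of these modes (each is covered in detail in the Usage section below):

# as a self-hosted Web App
archivebox server 0.0.0.0:8000
# as a CLI tool
archivebox add 'https://example.com'
# as a Python library (opens a Django shell with the API loaded)
archivebox shell
# as a one-off command (no initialized collection needed)
archivebox oneshot 'https://example.com'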




📥 You can feed ArchiveBox URLs one at a time, or schedule regular imports from your bookmarks or history, social media feeds or RSS, link-saving services like Pocket/Pinboard, our Browser Extension, and more.
See Input Formats for a full list of supported input formats...
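For example, two typical imports (the file path and feed URL here are just placeholders):

archivebox add < ~/Downloads/bookmarks_export.html            # import a browser bookmarks export
curl -s 'https://example.com/some/feed.rss' | archivebox add  # import URLs from an RSS feed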



It saves snapshots of the URLs you feed it in several redundant formats.
It also detects any content featured inside pages & extracts it out into a folder:

  • 🌐 HTML/Any websites ➡️ original HTML+CSS+JS, singlefile HTML, screenshot PNG, PDF, WARC, title, article text, favicon, headers, ...
  • 🎥 Social Media/News ➡️ post content TXT, comments, title, author, images, ...
  • 🎬 YouTube/SoundCloud/etc. ➡️ MP3/MP4s, subtitles, metadata, thumbnail, ...
  • 💾 Github/Gitlab/etc. links ➡️ clone of GIT source code, README, images, ...
  • and more, see Output Formats below...

You can run ArchiveBox as a Docker web app to manage these snapshots, or continue accessing the same collection using the pip-installed CLI, Python API, and SQLite3 APIs. All the ways of using it are equivalent, and provide matching features like adding tags, scheduling regular crawls, viewing logs, and more...



🛠️ ArchiveBox uses standard tools like Chrome, wget, & yt-dlp, and stores data in ordinary files & folders.
(no complex proprietary formats, all data is readable without needing to run ArchiveBox)

The goal is to sleep soundly knowing the part of the internet you care about will be automatically preserved in durable, easily accessible formats for decades after it goes down.



📦  Install ArchiveBox using your preferred method: docker / pip / apt / etc. (see full Quickstart below).

  Expand for quick copy-pastable install commands...   ⤵️
# Option A: Get ArchiveBox with Docker Compose (recommended):
mkdir -p ~/archivebox/data && cd ~/archivebox
curl -fsSL 'https://docker-compose.archivebox.io' > docker-compose.yml   # edit options in this file as-needed
docker compose run archivebox init --setup
# docker compose run archivebox add 'https://example.com'
# docker compose run archivebox help
# docker compose up


# Option B: Or use it as a plain Docker container:
mkdir -p ~/archivebox/data && cd ~/archivebox/data
docker run -it -v $PWD:/data archivebox/archivebox init --setup
# docker run -it -v $PWD:/data archivebox/archivebox add 'https://example.com'
# docker run -it -v $PWD:/data archivebox/archivebox help
# docker run -it -v $PWD:/data -p 8000:8000 archivebox/archivebox


# Option C: Or install it with your preferred pkg manager (see Quickstart below for apt, brew, and more)
pip install archivebox
mkdir -p ~/archivebox/data && cd ~/archivebox/data
archivebox init --setup
# archivebox add 'https://example.com'
# archivebox help
# archivebox server 0.0.0.0:8000


# Option D: Or use the optional auto setup script to install it
curl -fsSL 'https://get.archivebox.io' | sh

Open http://localhost:8000 to see your server's Web UI ➡️




Demo | Screenshots | Usage


Key Features


🤝 Professional Integration

ArchiveBox is free for everyone to self-host, but we also provide support, security review, and custom integrations to help NGOs, governments, and other organizations run ArchiveBox professionally:

  • Journalists: crawling during research, preserving cited pages, fact-checking & review
  • Lawyers: collecting & preserving evidence, detecting changes, tagging & review
  • Researchers: analyzing social media trends, getting LLM training data, crawling pipelines
  • Individuals: saving bookmarks, preserving portfolio content, legacy / memoirs archival
  • Governments: snapshotting public service sites, recordkeeping compliance

Contact us if your org wants help using ArchiveBox professionally.
We offer: setup & support, hosting, custom features, security, hashing & audit logging/chain-of-custody, etc.
ArchiveBox has 🏛️ 501(c)(3) nonprofit status and all our work supports open-source development.




Quickstart

🖥  Supported OSs: Linux/BSD, macOS, Windows (Docker)   👾  CPUs: amd64 (x86_64), arm64, arm7 (raspi>=3)


✳️  Easy Setup

docker-compose (macOS/Linux/Windows)   👈  recommended   (click to expand)
👍 Docker Compose is recommended for the easiest install/update UX + best security + all extras out-of-the-box.

  1. Install Docker on your system (if not already installed).
  2. Download the docker-compose.yml file into a new empty directory (can be anywhere).
    mkdir -p ~/archivebox/data && cd ~/archivebox
    # Read and edit docker-compose.yml options as-needed after downloading
    curl -fsSL 'https://docker-compose.archivebox.io' > docker-compose.yml
  3. Run the initial setup to create an admin user (or set ADMIN_USERNAME/ADMIN_PASSWORD in docker-compose.yml)
    docker compose run archivebox init --setup
  4. Next steps: Start the server then login to the Web UI http://127.0.0.1:8000 ⇢ Admin.
    docker compose up
    # completely optional, CLI can always be used without running a server
    # docker compose run [-T] archivebox [subcommand] [--help]
    docker compose run archivebox add 'https://example.com'
    docker compose run archivebox help
    For more info, see Install: Docker Compose in the Wiki. ➡️

See below for more usage examples using the CLI, Web UI, or filesystem/SQL/Python to manage your archive.

docker run (macOS/Linux/Windows)
  1. Install Docker on your system (if not already installed).
  2. Create a new empty directory and initialize your collection (can be anywhere).
    mkdir -p ~/archivebox/data && cd ~/archivebox/data
    docker run -v $PWD:/data -it archivebox/archivebox init --setup
  3. Optional: Start the server then login to the Web UI http://127.0.0.1:8000 ⇢ Admin.
    docker run -v $PWD:/data -p 8000:8000 archivebox/archivebox
    # completely optional, CLI can always be used without running a server
    # docker run -v $PWD:/data -it archivebox/archivebox [subcommand] [--help]
    docker run -v $PWD:/data -it archivebox/archivebox help
    For more info, see Install: Docker in the Wiki. ➡️

See below for more usage examples using the CLI, Web UI, or filesystem/SQL/Python to manage your archive.

bash auto-setup script (macOS/Linux)
  1. Install Docker on your system (optional, highly recommended but not required).
  2. Run the automatic setup script.
    curl -fsSL 'https://get.archivebox.io' | sh
    For more info, see Install: Bare Metal in the Wiki. ➡️

See below for more usage examples using the CLI, Web UI, or filesystem/SQL/Python to manage your archive.
See setup.sh for the source code of the auto-install script.
See "Against curl | sh as an install method" blog post for my thoughts on the shortcomings of this install method.


🛠  Package Manager Setup

pip (macOS/Linux/BSD)
  1. Install Python >= v3.10 and Node >= v18 on your system (if not already installed).
  2. Install the ArchiveBox package using pip3 (or pipx).
    pip3 install --upgrade archivebox yt-dlp playwright
    playwright install --with-deps chromium
    archivebox version
    # install any missing extras shown using apt/brew/pkg/etc. see Wiki for instructions
    #    python@3.10 node curl wget git ripgrep ...
    See the Install: Bare Metal Wiki for full install instructions for each OS...
  3. Create a new empty directory and initialize your collection (can be anywhere).
    mkdir -p ~/archivebox/data && cd ~/archivebox/data   # for example
    archivebox init --setup   # initialize a new collection
    # (--setup auto-installs and links JS dependencies: singlefile, readability, mercury, etc.)
  4. Optional: Start the server then login to the Web UI http://127.0.0.1:8000 ⇢ Admin.
    archivebox server 0.0.0.0:8000
    # completely optional, CLI can always be used without running a server
    # archivebox [subcommand] [--help]
    archivebox help

See below for more usage examples using the CLI, Web UI, or filesystem/SQL/Python to manage your archive.

See the pip-archivebox repo for more details about this distribution.

apt (Ubuntu/Debian/etc.)
  1. Add the ArchiveBox repository to your sources.
    echo "deb http://ppa.launchpad.net/archivebox/archivebox/ubuntu focal main" | sudo tee /etc/apt/sources.list.d/archivebox.list
    sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys C258F79DCC02E369
    sudo apt update
  2. Install the ArchiveBox package using apt.
    sudo apt install archivebox
    # update to newest version with pip (sometimes apt package is outdated)
    pip install --upgrade --ignore-installed archivebox yt-dlp playwright
    playwright install --with-deps chromium    # install chromium and its system dependencies
    archivebox version                         # make sure all dependencies are installed
  3. Create a new empty directory and initialize your collection (can be anywhere).
    mkdir -p ~/archivebox/data && cd ~/archivebox/data
    archivebox init --setup
    Note: If you encounter issues or want more granular instructions, see the Install: Bare Metal Wiki.

  4. Optional: Start the server then login to the Web UI http://127.0.0.1:8000 ⇢ Admin.
    archivebox server 0.0.0.0:8000
    # completely optional, CLI can always be used without running a server
    # archivebox [subcommand] [--help]
    archivebox help

See below for more usage examples using the CLI, Web UI, or filesystem/SQL/Python to manage your archive.
See the debian-archivebox repo for more details about this distribution.

brew (macOS only)
  1. Install Homebrew on your system (if not already installed).
  2. Install the ArchiveBox package using brew.
    brew tap archivebox/archivebox
    brew install archivebox
    # update to newest version with pip (sometimes brew package is outdated)
    pip install --upgrade --ignore-installed archivebox yt-dlp playwright
    playwright install --with-deps chromium    # install chromium and its system dependencies
    archivebox version                         # make sure all dependencies are installed
    See the Install: Bare Metal Wiki for more granular instructions for macOS... ➡️
  3. Create a new empty directory and initialize your collection (can be anywhere).
    mkdir -p ~/archivebox/data && cd ~/archivebox/data
    archivebox init --setup
  4. Optional: Start the server then login to the Web UI http://127.0.0.1:8000 ⇢ Admin.
    archivebox server 0.0.0.0:8000
    # completely optional, CLI can always be used without running a server
    # archivebox [subcommand] [--help]
    archivebox help

See below for more usage examples using the CLI, Web UI, or filesystem/SQL/Python to manage your archive.
See the homebrew-archivebox repo for more details about this distribution.

pacman / pkg / nix (Arch/FreeBSD/NixOS/more)

Warning: These are contributed by external volunteers and may lag behind the official pip channel.

See below for usage examples using the CLI, Web UI, or filesystem/SQL/Python to manage your archive.


🎗  Other Options

Docker + Electron Desktop App (macOS/Linux/Windows)
  1. Install Docker on your system (if not already installed).
  2. Download a binary release for your OS or build the native app from source

✨ Alpha (contributors wanted!): for more info, see the: Electron ArchiveBox repo.
TrueNAS / UNRAID / YunoHost / Cloudron / etc. (self-hosting platforms)

Warning: These are contributed by external volunteers and may lag behind the official pip channel.

See below for usage examples using the CLI, Web UI, or filesystem/SQL/Python to manage your archive.

Paid hosting solutions (cloud VPS)

For more discussion on managed and paid hosting options see here: Issue #531.


➡️  Next Steps


Usage

⚡️  CLI Usage

ArchiveBox commands can be run in a terminal directly on your host, or via Docker/Docker Compose.
(depending on how you chose to install it above)

mkdir -p ~/archivebox/data   # create a new data dir anywhere
cd ~/archivebox/data         # IMPORTANT: cd into the directory

# archivebox [subcommand] [--help]
archivebox version
archivebox help

# equivalent: docker compose run archivebox [subcommand] [--help]
docker compose run archivebox help

# equivalent: docker run -it -v $PWD:/data archivebox/archivebox [subcommand] [--help]
docker run -it -v $PWD:/data archivebox/archivebox help

ArchiveBox Subcommands

  • archivebox help/version to see the list of available subcommands / currently installed version info
  • archivebox setup/init/config/status/shell/manage to administer your collection
  • archivebox add/oneshot/schedule to pull in fresh URLs from bookmarks/history/RSS/etc.
  • archivebox list/update/remove to manage existing Snapshots in your collection

CLI Usage Examples: non-Docker
# make sure you have pip-installed ArchiveBox and it's available in your $PATH first  

# archivebox [subcommand] [--help]
archivebox init --setup      # safe to run init multiple times (also how you update versions)
archivebox version           # get archivebox version info + check dependencies
archivebox help              # get list of archivebox subcommands that can be run
archivebox add --depth=1 'https://news.ycombinator.com'
For more info, see our Usage: CLI Usage wiki. ➡️

CLI Usage Examples: Docker Compose
# make sure you have `docker-compose.yml` from the Quickstart instructions first

# docker compose run archivebox [subcommand] [--help]
docker compose run archivebox init --setup
docker compose run archivebox version
docker compose run archivebox help
docker compose run archivebox add --depth=1 'https://news.ycombinator.com'
# to start webserver: docker compose up
For more info, see our Usage: Docker Compose CLI wiki. ➡️

CLI Usage Examples: Docker
# make sure you create and cd into a new empty directory first

# docker run -it -v $PWD:/data archivebox/archivebox [subcommand] [--help]
docker run -v $PWD:/data -it archivebox/archivebox init --setup
docker run -v $PWD:/data -it archivebox/archivebox version
docker run -v $PWD:/data -it archivebox/archivebox help
docker run -v $PWD:/data -it archivebox/archivebox add --depth=1 'https://news.ycombinator.com'
# to start webserver: docker run -v $PWD:/data -it -p 8000:8000 archivebox/archivebox
For more info, see our Usage: Docker CLI wiki. ➡️

🗄  SQL/Python/Filesystem Usage
archivebox shell           # explore the Python library API in a REPL
sqlite3 ./index.sqlite3    # run SQL queries directly on your index
ls ./archive/*/index.html  # or inspect snapshot data directly on the filesystem
For more info, see our Python Shell, SQL API, and Disk Layout wikis. ➡️
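For example, a quick ad-hoc query against the index (a sketch; the core_snapshot table name comes from the underlying Django app layout and may differ between versions):

sqlite3 ./index.sqlite3 'SELECT timestamp, url, title FROM core_snapshot ORDER BY added DESC LIMIT 10;'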

🖥  Web UI Usage
# Start the server on bare metal (pip/apt/brew/etc):
archivebox manage createsuperuser              # create a new admin user via CLI
archivebox server 0.0.0.0:8000                 # start the server

# Or with Docker Compose:
nano docker-compose.yml                        # setup initial ADMIN_USERNAME & ADMIN_PASSWORD
docker compose up                              # start the server

# Or with a Docker container:
docker run -v $PWD:/data -it archivebox/archivebox archivebox manage createsuperuser
docker run -v $PWD:/data -it -p 8000:8000 archivebox/archivebox

Open http://localhost:8000 to see your server's Web UI ➡️

For more info, see our Usage: Web UI wiki. ➡️

Optional: Change permissions to allow non-logged-in users

archivebox config --set PUBLIC_ADD_VIEW=True   # allow guests to submit URLs 
archivebox config --set PUBLIC_SNAPSHOTS=True  # allow guests to see snapshot content
archivebox config --set PUBLIC_INDEX=True      # allow guests to see list of all snapshots
# or
docker compose run archivebox config --set ...

# restart the server to apply any config changes


Tip

Whether in Docker or not, ArchiveBox commands work the same way, and can be used to access the same data on-disk. For example, you could run the Web UI in Docker Compose, and run one-off commands with pip-installed ArchiveBox.

Expand to show comparison...
archivebox add --depth=1 'https://example.com'                     # add a URL with pip-installed archivebox on the host
docker compose run archivebox add --depth=1 'https://example.com'                       # or w/ Docker Compose
docker run -it -v $PWD:/data archivebox/archivebox add --depth=1 'https://example.com'  # or w/ Docker, all equivalent

For more info, see our Docker wiki. ➡️


grassgrass


DEMO: https://demo.archivebox.io
Usage | Configuration | Caveats



Overview

Input Formats: How to pass URLs into ArchiveBox for saving

# archivebox add --help
archivebox add 'https://example.com/some/page'
archivebox add --parser=generic_rss < ~/Downloads/some_feed.xml
archivebox add --depth=1 'https://news.ycombinator.com#2020-12-12'
echo 'http://example.com' | archivebox add
echo 'any text with <a href="https://example.com">urls</a> in it' | archivebox add

# if using Docker, add -i when piping stdin:
# echo 'https://example.com' | docker run -v $PWD:/data -i archivebox/archivebox add
# if using Docker Compose, add -T when piping stdin / stdout:
# echo 'https://example.com' | docker compose run -T archivebox add

See the Usage: CLI page for documentation and examples.

It also includes a built-in scheduled import feature via archivebox schedule and a browser bookmarklet, so you can pull in URLs from RSS feeds, websites, or the filesystem regularly or on-demand.
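For example, a minimal scheduled import (the feed URL is a placeholder; run archivebox schedule --help for all options):

archivebox schedule --every=day --depth=0 'https://example.com/some/feed.rss'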


Output Formats: What ArchiveBox saves for each URL

For each web page added, ArchiveBox creates a Snapshot folder and preserves its content as ordinary files inside the folder (e.g. HTML, PDF, PNG, JSON, etc.).

It uses all available methods out-of-the-box, but you can disable extractors and fine-tune the configuration as-needed.
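For example, to turn off a couple of the heavier extractors (SAVE_MEDIA and SAVE_GIT are standard config options, documented in the Configuration section below):

archivebox config --set SAVE_MEDIA=False
archivebox config --set SAVE_GIT=False
# or equivalently: docker compose run archivebox config --set SAVE_MEDIA=False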


Expand to see the full list of ways it saves each page...

data/archive/{Snapshot.id}/

  • Index: index.html & index.json HTML and JSON index files containing metadata and details
  • Title, Favicon, Headers Response headers, site favicon, and parsed site title
  • SingleFile: singlefile.html HTML snapshot rendered with headless Chrome using SingleFile
  • Wget Clone: example.com/page-name.html wget clone of the site with warc/TIMESTAMP.gz
  • Chrome Headless
    • PDF: output.pdf Printed PDF of site using headless chrome
    • Screenshot: screenshot.png 1440x900 screenshot of site using headless chrome
    • DOM Dump: output.html DOM Dump of the HTML after rendering using headless chrome
  • Article Text: article.html/json Article text extraction using Readability & Mercury
  • Archive.org Permalink: archive.org.txt A link to the saved site on archive.org
  • Audio & Video: media/ all audio/video files + playlists, including subtitles & metadata w/ yt-dlp
  • Source Code: git/ clone of any repository found on GitHub, Bitbucket, or GitLab links
  • More coming soon! See the Roadmap...

Configuration

ArchiveBox can be configured via environment variables, by using the archivebox config CLI, or by editing ./ArchiveBox.conf.

Expand to see examples...
archivebox config                               # view the entire config
archivebox config --get CHROME_BINARY           # view a specific value

archivebox config --set CHROME_BINARY=chromium  # persist a config using CLI
# OR
echo CHROME_BINARY=chromium >> ArchiveBox.conf  # persist a config using file
# OR
env CHROME_BINARY=chromium archivebox ...       # run with a one-off config
These methods also work the same way when run inside Docker; see the Docker Configuration wiki page for details.

The configuration is documented here: Configuration Wiki, and loaded here: archivebox/config.py.

Expand to see the most common options to tweak...
# e.g. archivebox config --set TIMEOUT=120
# or   docker compose run archivebox config --set TIMEOUT=120

TIMEOUT=120                # default: 60    add more seconds on slower networks
CHECK_SSL_VALIDITY=False   # default: True  False = allow saving URLs w/ bad SSL
SAVE_ARCHIVE_DOT_ORG=False # default: True  False = disable Archive.org saving
MEDIA_MAX_SIZE=1500m       # default: 750m  raise/lower yt-dlp output size limit

PUBLIC_INDEX=True          # default: True  whether anon users can view index
PUBLIC_SNAPSHOTS=True      # default: True  whether anon users can view pages
PUBLIC_ADD_VIEW=False      # default: False whether anon users can add new URLs

CHROME_USER_AGENT="Mozilla/5.0 ..."  # change these to get around bot blocking
WGET_USER_AGENT="Mozilla/5.0 ..."
CURL_USER_AGENT="Mozilla/5.0 ..."

Dependencies

To achieve high-fidelity archives in as many situations as possible, ArchiveBox depends on a variety of 3rd-party libraries and tools that specialize in extracting different types of content.

Under the hood, ArchiveBox uses Django to power its Web UI and SQLite + the filesystem to provide fast & durable metadata storage w/ deterministic upgrades.

ArchiveBox bundles industry-standard tools like Google Chrome, wget, yt-dlp, readability, etc. internally, and its operation can be tuned, secured, and extended as-needed for many different applications.


Expand to learn more about ArchiveBox's internals & dependencies...

TIP: For better security, easier updating, and to avoid polluting your host system with extra dependencies, it is strongly recommended to use the ⭐️ official Docker image, which comes with everything pre-installed for the best experience.

These optional dependencies used for archiving sites include:

  • chromium / chrome (for screenshots, PDF, DOM HTML, and headless JS scripts)
  • node & npm (for readability, mercury, and singlefile)
  • wget (for plain HTML, static files, and WARC saving)
  • curl (for fetching headers, favicon, and posting to Archive.org)
  • yt-dlp or youtube-dl (for audio, video, and subtitles)
  • git (for cloning git repos)
  • singlefile (for saving into a self-contained html file)
  • postlight/parser (for discussion threads, forums, and articles)
  • readability (for articles and long text content)
  • and more as we grow...

You don't need to install every dependency to use ArchiveBox. ArchiveBox will automatically disable extractors that rely on dependencies that aren't installed, based on what is configured and available in your $PATH.

If not using Docker, make sure to keep the dependencies up-to-date yourself and check that ArchiveBox isn't reporting any incompatibility with the versions you install.

# install python3 and archivebox with your system package manager
# apt/brew/pip/etc install ... (see Quickstart instructions above)

which -a archivebox    # see where you have installed archivebox
archivebox setup       # auto install all the extractors and extras
archivebox --version   # see info and check validity of installed dependencies

Installing directly on Windows without Docker or WSL/WSL2/Cygwin is not officially supported (I cannot respond to Windows support tickets), but some advanced users have reported getting it working.

Learn More


Archive Layout

All of ArchiveBox's state (SQLite DB, content, config, logs, etc.) is stored in a single folder per collection.


Expand to learn more about the layout of ArchiveBox's data on-disk...

Data folders can be created anywhere (~/archivebox/data or $PWD/data as seen in our examples), and you can create as many data folders as you want to hold different collections. All archivebox CLI commands are designed to be run from inside an ArchiveBox data folder, starting with archivebox init to initialize a new collection inside an empty directory.

mkdir -p ~/archivebox/data && cd ~/archivebox/data   # just an example, can be anywhere
archivebox init

The on-disk layout is optimized to be easy to browse by hand and durable long-term. The main index is a standard index.sqlite3 database in the root of the data folder (it can also be exported as static JSON/HTML), and the archive snapshots are organized by date-added timestamp in the data/archive/ subfolder.

data/
    index.sqlite3
    ArchiveBox.conf
    archive/
        ...
        1617687755/
            index.html
            index.json
            screenshot.png
            media/some_video.mp4
            warc/1617687755.warc.gz
            git/somerepo.git
            ...

Each snapshot subfolder data/archive/TIMESTAMP/ includes a static index.json and index.html describing its contents, and the snapshot extractor outputs are plain files within the folder.
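For example, snapshot metadata can be read straight off the filesystem (a sketch assuming jq is installed; the url/title field names match recent index.json versions but may vary):

jq -r '"\(.url)\t\(.title)"' ./archive/*/index.json   # print the URL + title of every snapshot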

Learn More


Static Archive Exporting

You can create one-off archives of individual URLs with archivebox oneshot, or export your index as static HTML using archivebox list (so you can view it without an ArchiveBox server).


Expand to learn how to export your ArchiveBox collection...

NOTE: These exports are not paginated, so exporting many URLs or the entire archive at once may be slow. Use the filtering CLI flags on the archivebox list command to export specific Snapshots or ranges.

# do a one-off single URL archive without needing a data dir initialized
archivebox oneshot 'https://example.com'

# archivebox list --help
archivebox list --html --with-headers > index.html     # export to static html table
archivebox list --json --with-headers > index.json     # export to json blob
archivebox list --csv=timestamp,url,title > index.csv  # export to csv spreadsheet

# (if using Docker Compose, add the -T flag when piping)
# docker compose run -T archivebox list --html 'https://example.com' > index.html

The paths in the static exports are relative; make sure to keep them next to your ./archive folder when backing them up or viewing them.
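For example, one way to keep an export and the snapshot data it references together when backing up (the destination path is a placeholder):

archivebox list --html --with-headers > index.html
rsync -a index.html ./archive /mnt/backup/archivebox-export/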

Learn More



Caveats

Archiving Private Content

If you're importing pages with private content or URLs containing secret tokens you don't want made public (e.g. Google Docs, paywalled content, unlisted videos, etc.), you may want to disable some of the extractor methods to avoid leaking that content to 3rd-party APIs or the public.


Expand to learn about privacy, permissions, and user accounts...
# don't save private content to ArchiveBox, e.g.:
archivebox add 'https://docs.google.com/document/d/12345somePrivateDocument'
archivebox add 'https://vimeo.com/somePrivateVideo'

# without first disabling saving to Archive.org:
archivebox config --set SAVE_ARCHIVE_DOT_ORG=False  # disable saving all URLs in Archive.org

# restrict the main index, Snapshot content, and Add Page to authenticated users as-needed:
archivebox config --set PUBLIC_INDEX=False
archivebox config --set PUBLIC_SNAPSHOTS=False
archivebox config --set PUBLIC_ADD_VIEW=False 
archivebox manage createsuperuser

# if extra paranoid or anti-Google:
archivebox config --set SAVE_FAVICON=False          # disable favicon fetching (it calls a Google API passing the URL's domain part only)
archivebox config --set CHROME_BINARY=chromium      # ensure it's using Chromium instead of Chrome

CAUTION: Assume anyone viewing your archives will be able to see any cookies, session tokens, or private URLs passed to ArchiveBox during archiving. Make sure to secure your ArchiveBox data and don't share snapshots with others without stripping out sensitive headers and content first.

Learn More


Security Risks of Viewing Archived JS

Be aware that malicious archived JS can access the contents of other pages in your archive when viewed. Because the Web UI serves all viewed snapshots from a single domain, they share a request context and typical CSRF/CORS/XSS/CSP protections do not work to prevent cross-site request attacks. See the Security Overview page and Issue #239 for more details.


Expand to see risks and mitigations...
# visiting an archived page with malicious JS:
https://127.0.0.1:8000/archive/1602401954/example.com/index.html

# example.com/index.js can now make a request to read everything from:
https://127.0.0.1:8000/index.html
https://127.0.0.1:8000/archive/*
# then example.com/index.js can send it off to some evil server

NOTE: Only the wget & dom extractor methods execute archived JS when viewing snapshots, all other archive methods produce static output that does not execute JS on viewing.
If you are worried about these issues ^ you should disable these extractors using:
archivebox config --set SAVE_WGET=False SAVE_DOM=False.

Learn More


Working Around Sites that Block Archiving

For various reasons, many large sites (Reddit, Twitter, Cloudflare, etc.) actively block archiving or bots in general. There are a number of approaches to work around this.


Click to learn how to set up user agents, cookies, and site logins...

In the future we plan on adding support for running JS scripts during archiving to block ads, cookie popups, modals, and fix other issues. Follow here for progress: Issue #51.
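In the meantime, a sketch of the usual workarounds using existing config options (the user-agent options are shown in the Configuration section above; check the Configuration Wiki to confirm the cookie/profile options exist in your version):

archivebox config --set CHROME_USER_AGENT="Mozilla/5.0 (Windows NT 10.0; Win64; x64) ..."  # spoof a desktop browser UA
archivebox config --set WGET_USER_AGENT="Mozilla/5.0 ..."
archivebox config --set COOKIES_FILE=/path/to/cookies.txt              # cookies exported from a logged-in browser session
archivebox config --set CHROME_USER_DATA_DIR=/path/to/chrome-profile   # reuse a Chrome profile that is already logged in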


Saving Multiple Snapshots of a Single URL

ArchiveBox appends a hash with the current date https://example.com#2020-10-24 to differentiate when a single URL is archived multiple times.


Click to learn how the Re-Snapshot feature works...

Because ArchiveBox uniquely identifies snapshots by URL, it must use a workaround to take multiple snapshots of the same URL (otherwise they would show up as a single Snapshot entry). It makes the URLs of repeated snapshots unique by adding a hash with the archive date at the end:

archivebox add 'https://example.com#2020-10-24'
...
archivebox add 'https://example.com#2020-10-25'

The Re-Snapshot button in the Admin UI is a shortcut for this hash-date multi-snapshotting workaround.

Improved support for saving multiple snapshots of a single URL without this hash-date workaround will be added eventually (along with the ability to view diffs of the changes between runs).
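For example, the workaround can be automated by appending today's date to the URL:

archivebox add "https://example.com#$(date +%Y-%m-%d)"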

Learn More


Storage Requirements

Because ArchiveBox is designed to ingest a large volume of URLs with multiple copies of each URL stored by different 3rd-party tools, it can be quite disk-space intensive. There are also some special requirements when using filesystems like NFS/SMB/FUSE.


Click to learn more about ArchiveBox's filesystem and hosting requirements...
  • ArchiveBox can use anywhere from ~1GB to ~50GB per 1000 Snapshots, depending mostly on whether you're saving audio & video using SAVE_MEDIA=True and whether you lower MEDIA_MAX_SIZE from the default 750m.
  • Disk usage can be reduced by using a compressed/deduplicated filesystem like ZFS/BTRFS, or by turning off extractor methods you don't need. You can also deduplicate content with a tool like fdupes or rdfind (see the sketch after this list).
  • Don't store large collections on older filesystems like EXT3/FAT as they may not be able to handle more than 50k directory entries in the data/archive/ folder.
  • Try to keep the data/index.sqlite3 file on a local drive (not a network mount) or an SSD for maximum performance; the data/archive/ folder can be on a network mount or slower HDD.
  • If using Docker or NFS/SMB/FUSE for the data/archive/ folder, you may need to set PUID & PGID and disable root_squash on your fileshare server.
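For example, a sketch of hardlink-based deduplication of identical files across snapshots with rdfind (test against a copy of your data first; assumes rdfind is installed):

rdfind -makehardlinks true -makeresultsfile false ./archive/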

Learn More




Screenshots

(Screenshots: brew install archivebox, archivebox version, archivebox init, archivebox add, the data directory, and the server Web UI's add, list, and snapshot detail pages.)



Background & Motivation

ArchiveBox aims to enable more of the internet to be saved from deterioration by empowering people to self-host their own archives. The intent is for all the web content you care about to be viewable with common software in 50 - 100 years without needing to run ArchiveBox or other specialized software to replay it.


Click to read more about why archiving is important and how to do it ethically...

Vast treasure troves of knowledge are lost every day on the internet to link rot. As a society, we have an imperative to preserve some important parts of that treasure, just like we preserve our books, paintings, and music in physical libraries long after the originals go out of print or fade into obscurity.

Whether it's to resist censorship by saving news articles before they get taken down or edited, or just to save a collection of early-2010s Flash games you loved to play, having the tools to archive internet content enables you to save the stuff you care most about before it disappears.



The balance between the permanence and ephemeral nature of content on the internet is part of what makes it beautiful. I don't think everything should be preserved in an automated fashion (making all content permanent and never removable), but I do think people should be able to decide for themselves and effectively archive specific content that they care about, just like libraries do. Without the work of archivists saving physical books, manuscripts, and paintings we wouldn't have any knowledge of our ancestors' history. I believe archiving the web is just as important for providing the same benefit to future generations.

ArchiveBox's stance is that duplication of other people's content is only ethical if it:

  • A. doesn't deprive the original creators of revenue and
  • B. is responsibly curated by an individual/institution.

In the U.S., libraries, researchers, and archivists are allowed to duplicate copyrighted materials under "fair use" for private study, scholarship, or research. Archive.org's preservation work is covered under this exemption, as it is a non-profit providing a public service, and it responds to unethical content/DMCA/GDPR removal requests.

As long as you A. don't try to profit off pirating copyrighted content and B. have processes in place to respond to removal requests, many countries allow you to use software like ArchiveBox to ethically and responsibly archive any web content you can view. That being said, ArchiveBox is not liable for how you choose to operate the software. You must research your own local laws and regulations, and get proper legal counsel if you plan to host a public instance (start by putting your DMCA/GDPR contact info in FOOTER_INFO and changing your instance's branding using CUSTOM_TEMPLATES_DIR).


Comparison to Other Projects


Check out our community wiki for a list of web archiving tools and orgs.

A variety of open and closed-source archiving projects exist, but few provide a nice UI and CLI to manage a large, high-fidelity collection over time.


Click to read about how we differ from other centralized archiving services and open source tools...

ArchiveBox tries to be a robust, set-and-forget archiving solution suitable for archiving RSS feeds, bookmarks, or your entire browsing history (beware, it may be too big to store), including private/authenticated content that you wouldn't otherwise share with a centralized service.

Comparison With Centralized Public Archives

Not all content is suitable to be archived in a centralized collection, whether because it's private, copyrighted, too large, or too complex. ArchiveBox hopes to fill that gap.

By having each user store their own content locally, we can save much larger portions of everyone's browsing history than a shared centralized service would be able to handle. The eventual goal is to work towards federated archiving where users can share portions of their collections with each other.

Comparison With Other Self-Hosted Archiving Options

ArchiveBox differentiates itself from similar self-hosted projects by providing a comprehensive CLI for managing your archive, a Web UI that can be used either independently or together with the CLI, and a simple on-disk data format that can be used without either.

If you want better fidelity for very complex interactive pages with heavy JS/streams/API requests, check out ArchiveWeb.page and ReplayWeb.page.

If you want more bookmark categorization and note-taking features, check out Archivy, Memex, Polar, or LinkAce.

If you need more advanced recursive spider/crawling ability beyond --depth=1, check out Browsertrix, Photon, or Scrapy and pipe the outputted URLs into ArchiveBox.

For more alternatives, see our list here...

ArchiveBox is neither the highest-fidelity nor the simplest tool available for self-hosted archiving; rather, it's a jack-of-all-trades that tries to do most things well by default. We encourage you to try these other tools made by our friends if ArchiveBox isn't suited to your needs.


Internet Archiving Ecosystem

Our Community Wiki strives to be a comprehensive index of the web archiving industry...

Need help building a custom archiving solution?

Hire the team that built ArchiveBox to work on your project. (@ArchiveBoxApp)



Documentation

We use the GitHub wiki system and Read the Docs (WIP) for documentation.

You can also access the docs locally by looking in the ArchiveBox/docs/ folder.

Getting Started

Advanced

Developers

More Info




ArchiveBox Development

All contributions to ArchiveBox are welcome! Check our issues and Roadmap for things to work on, and please open an issue to discuss your proposed implementation before starting work. Otherwise we may have to close your PR if it doesn't align with our roadmap.

For low hanging fruit / easy first tickets, see: ArchiveBox/Issues #good first ticket #help wanted.

Python API Documentation: https://docs.archivebox.io/en/dev/archivebox.html#module-archivebox.main

Setup the dev environment

Click to expand...

1. Clone the main code repo (making sure to pull the submodules as well)

git clone --recurse-submodules https://github.com/ArchiveBox/ArchiveBox
cd ArchiveBox
git checkout dev  # or the branch you want to test
git submodule update --init --recursive
git pull --recurse-submodules

2. Option A: Install the Python, JS, and system dependencies directly on your machine

# Install ArchiveBox + python dependencies
python3 -m venv .venv && source .venv/bin/activate && pip install -e '.[dev]'
# or: pipenv install --dev && pipenv shell

# Install node dependencies
npm install
# or
archivebox setup

# Check to see if anything is missing
archivebox --version
# install any missing dependencies manually, or use the helper script:
./bin/setup.sh

2. Option B: Build the docker container and use that for development instead

# Optional: develop via docker by mounting the code dir into the container
# if you edit e.g. ./archivebox/core/models.py on the docker host, runserver
# inside the container will reload and pick up your changes
docker build . -t archivebox
docker run -it \
    -v $PWD/data:/data \
    archivebox init --setup
docker run -it -p 8000:8000 \
    -v $PWD/data:/data \
    -v $PWD/archivebox:/app/archivebox \
    archivebox server 0.0.0.0:8000 --debug --reload

# (remove the --reload flag and add the --nothreading flag when profiling with the django debug toolbar)
# When using --reload, make sure any files you create can be read by the user in the Docker container, eg with 'chmod a+rX'.

Common development tasks

See the ./bin/ folder and read the source of the bash scripts within. You can also run all these in Docker. For more examples see the GitHub Actions CI/CD tests that are run: .github/workflows/*.yaml.

Run in DEBUG mode

Click to expand...
archivebox config --set DEBUG=True
# or
archivebox server --debug ...

https://stackoverflow.com/questions/1074212/how-can-i-see-the-raw-sql-queries-django-is-running

Install and run a specific GitHub branch

Click to expand...
Use a Pre-Built Image

If you're looking for the latest dev Docker image, it's often available pre-built on Docker Hub; simply pull and use archivebox/archivebox:dev.

docker pull archivebox/archivebox:dev
docker run archivebox/archivebox:dev version
# verify the BUILD_TIME and COMMIT_HASH in the output are recent
Build Branch from Source

You can also build and run any branch yourself from source, for example to build & use dev locally:

# docker-compose.yml:
services:
    archivebox:
        image: archivebox/archivebox:dev
        build: 'https://github.com/ArchiveBox/ArchiveBox.git#dev'
        ...

# or with plain Docker:
docker build -t archivebox:dev https://github.com/ArchiveBox/ArchiveBox.git#dev
docker run -it -v $PWD:/data archivebox:dev init --setup

# or with pip:
pip install 'git+https://github.com/pirate/ArchiveBox@dev'
npm install 'git+https://github.com/ArchiveBox/ArchiveBox.git#dev'
archivebox init --setup

Run the linters / tests

Click to expand...
./bin/lint.sh
./bin/test.sh

(uses flake8, mypy, and pytest -s)

Make DB migrations, enter Django shell, other dev helper commands

Click to expand...
# generate the database migrations after changes to models.py
cd archivebox/
./manage.py makemigrations

# enter a python shell or a SQL shell
cd path/to/test/data/
archivebox shell
archivebox manage dbshell

# generate a graph of the ORM models
brew install graphviz
pip install pydot graphviz
archivebox manage graph_models -a -o orm.png
open orm.png

# list all models with field db info and methods
archivebox manage list_model_info --all --signature --db-type --field-class

# print all django settings
archivebox manage print_settings
archivebox manage print_settings --format=yaml    # pip install pyyaml

# autogenerate an admin.py from given app models
archivebox manage admin_generator core > core/admin.py

# dump db data to a script that re-populates it
archivebox manage dumpscript core > scripts/testdata.py
archivebox manage reset core
archivebox manage runscript testdata

# resetdb and clear all data!
archivebox manage reset_db

# use django-tui to interactively explore commands
pip install django-tui
# ensure django-tui is in INSTALLED_APPS: core/settings.py
archivebox manage tui

# show python and JS package dependency trees
pdm list --tree
npm ls --all
ArchiveBox ORM models relationship graph

Contributing a new extractor

Click to expand...



ArchiveBox extractors are external binaries or Python/Node scripts that ArchiveBox runs to archive content on a page.

Extractors take the URL of a page to archive, write their output to the filesystem data/archive/TIMESTAMP/EXTRACTOR/..., and return an ArchiveResult entry which is saved to the database (visible on the Log page in the UI).

Check out how we added archivebox/extractors/singlefile.py as an example of the process: Issue #399 + PR #403.


The process to contribute a new extractor is as follows:

  1. Open an issue with your proposed implementation (please link to the pages of any new external dependencies you plan on using)
  2. Ensure any dependencies needed are easily installable via a package manager like apt, brew, pip3, or npm (ideally, prefer external programs available via pip3 or npm; however, we do support any binary installable via a package manager that exposes a CLI/Python API and writes output to stdout or the filesystem)
  3. Create a new file in archivebox/extractors/EXTRACTOR.py (copy an existing extractor like singlefile.py as a template)
  4. Add config settings to enable/disable any new dependencies and the extractor as a whole, e.g. USE_DEPENDENCYNAME, SAVE_EXTRACTORNAME, EXTRACTORNAME_SOMEOTHEROPTION in archivebox/config.py
  5. Add a preview section to archivebox/templates/core/snapshot.html to view the output, and a column to archivebox/templates/core/index_row.html with an icon for your extractor
  6. Add an integration test for your extractor in tests/test_extractors.py
  7. Submit your PR for review! 🎉
  8. Once merged, please document it in these places and anywhere else you see info about other extractors:



Build the docs, pip package, and docker image

Click to expand...

(Normally CI takes care of this, but these scripts can be run to do it manually)

./bin/build.sh

# or individually:
./bin/build_docs.sh
./bin/build_pip.sh
./bin/build_deb.sh
./bin/build_brew.sh
./bin/build_docker.sh

Roll a release

Click to expand...

(Normally CI takes care of this, but these scripts can be run to do it manually)

./bin/release.sh

# or individually:
./bin/release_docs.sh
./bin/release_pip.sh
./bin/release_deb.sh
./bin/release_brew.sh
./bin/release_docker.sh

Further Reading



🏛️ Contact us for professional support 💬


       
ArchiveBox operates as a US 501(c)(3) nonprofit (sponsored by HCB); direct donations are tax-deductible.

   
ArchiveBox was started by Nick Sweeting in 2017, and has grown steadily with help from our amazing contributors.
✨ Have spare CPU/disk/bandwidth after all your web-archiving crawls and want to help the world?
Check out our Good Karma Kit...

archivebox's People

Contributors

afreydev, ajgon, apkallum, aspensmonster, bardisty, benmuth, blipranger, c01o, cdvv7788, cdzombak, comsomisha, dependabot[bot], f0086, fa0311, hannah98, jdcaballerov, karlicoss, maadhattah, miladiir, mlazana, n0ncetonic, nodiscc, overhacked, pirate, raman325, sbrl, strubbl, thedanbob, ttimasdf, turian


archivebox's Issues

Archive Method: Add youtube-dl archiving for audio and video links

First draft should support:

  • youtube
  • vimeo
  • soundcloud
  • bandcamp
  • hypemachine
  • spotify?

Output should be the highest quality available by default, although we could also add config options: VIDEO_QUALITY=1080p AUDIO_QUALITY=320kbps.
Output should be put into the archive folder under:

  • audio/artist - title.mp3
  • video/uploader - title.mp4

Any downloaded files should also be linked to from the link index.html.

Archive Method: Markdown output by using full-text extraction provided by a "reader mode" library

Your program is great. I think it would be even better if it also saved a readability version which only tries to keep the relevant parts (reader view, clean view). I see two advantages:

  • Full-text searches should be more accurate and contain fewer irrelevant matches. I save a lot of interesting articles, and usually they are surrounded by headings/links to other articles which will show up in my searches.
  • It makes reading an article later much nicer. At the moment in Chromium 63 I can't use the most popular readability extension, Mercury Reader, to transform a site from my disk that I have saved with bookmark-archiver. It's the same for Firefox 58.0.2. So I have to copy the actual content into a text editor or word processor to have a distraction-free reading view.

Archive Method: Hangs on screenshot at 100%

On macOS, it created the initial index file and then proceeded to download several articles, links, and screencaps, but got stuck on this one:

[+] [1509635498] "Verizon Wants FCC to Ban States From Protecting Your Privacy | DSLReports, ": www.dslreports.com/shownews/Verizon-Wants-FCC-to-Ban-States-From-Protecting-Your-Privacy-140625
    html/archive/1509635498 (creating)
    - Fetching: wget
    - Fetching: pdf                                                                                                                                             
    - Fetching: screenshot                                                                                                                                      
      ███████████████████████████████████ 100.0% (60/60sec)

On wget 1.19.2

Consider removing 'star' / 'starred' from repo description, docs, and template

Just a minor suggestion, as the use of the word star may confuse some users who stumble upon the project. Main reason being, Bookmark Archiver supports a wide array of different bookmark formats/services, where most of them don't use the term star for anything, or if they do it might be used differently. Pinboard is one such case, where you can star any of your bookmarks and view all your starred bookmarks in one place:


The way the project reads currently, one might think it only includes "starred bookmarks". Instead of star, perhaps simply bookmark is a better choice:

- Save an archived copy of all websites you star (the actual content of the sites, not just the list of bookmarks).
+ Save an archived copy of all websites you bookmark (the actual content of the sites, not just the list of bookmarks).

As for the template, Starred could be replaced with Bookmarked or Date; both are equally clear to me.

Feature Request: Add support for sharing Snapshots with other ArchiveBox instances (to enable distributed / federated archiving)

Next big thing I'm thinking about for BA is turning it into a distributed way-back machine! Everyone's personal archives can still be kept separate, but as part of the archive process you're prompted if you want to share the pages you've archived with a federated public archive. Each archived url then gets a deterministic "federated id" which other people will be able to use to find all archived versions of a specific url.

So when I visit my personal archive and see "example.com/blog/123.html" I can click a "show all versions" button which shows an archive in 2010 by alice, one in 2013 by bob, and one in 2017 by frank. I can click on links to view each of their versions in case mine is bad or corrupted somehow.

On the search page you'll be able to search for any url (like the wayback machine), and if it's not in your personal archive it'll show results from other people's archives.

Archive Method: Fix Medium RSS feed parsing

~/D/C/bookmark-archiver py[archiver] ⎇ (ssl-check) ./archive.py "https://medium.com/feed/@thesquashsh"
[*] [2018-01-12 12:23:50] Downloading https://medium.com/feed/@thesquashsh > downloads/medium.com.txt
[X] No links found :(
[1]

Create a Package with this functionality

Hello dear friend,

I would be extremely interested to add the features you developed to a personal project.
Do you plan on releasing a package on PyPI?

The following could be awesome:

from archive_stream import Archiver

saver = Archiver("http://www.mygreaturl.com/article/")
saver.export_screenshot("/path/to/screenshot/filename.png")
saver.export_pdf("/path/to/pdf/filename.pdf")

It would be absolutely awesome if you could release it, and I can help you if you need anything.

For the record, I would like to integrate your solution to this platform : https://www.feedcrunch.io/@dataradar

Link Parsing: Add support for plain-text list of URLs

Currently the input file is required to be in the Netscape HTML export format. I have lists of URLs I'd like to archive, in plain-text format. Converting them to HTML should be doable, but I wish bookmark-archiver would accept plain text files, such as

./archive.py urls.txt

It would be reasonable to filter the file, retaining actual http(s)?://... URLs

Set User-Agent of Wget to Chrome or Custom Value

Some websites do not properly permit wget to crawl the entire site; they block it with a 403 or serve empty pages.

For this it would be useful to set the --user-agent parameter of wget to another value.

A good default would be the Chrome user-agent, which would make websites render as they do in a browser instead of how they treat wget's default.

Alternatively, the Lynx user-agent can be used; websites that are aware of Lynx will reduce JS and CSS content, making it easier to archive the page.

Archive Method: Full page height screenshots

To achieve this I set the RESOLUTION to '1440,10000', which usually seems to work. But is there a more elegant, proper way to do this?

BTW: your donation page sounds like you're mainly looking for big, recurring contributions. Are you also interested in small one-time donations ($10/20)?

Links generated with incorrect folder

I have a simple bookmark export file containing 1 url

https://gist.github.com/danhimalplanet/444a7bc30b34d249d4429527c78f5792

➜ ./archive.py ~/Downloads/bookmarks_1_30_18.html 2>&1 >/dev/null
➜ echo $?
0

this generates 'html/archive/index.html' https://gist.github.com/danhimalplanet/ac212bc2cd0047807ede39ba20934330

the generated index.html has relative links like:

'html/archive/archive/1517380882/screenshot.png'

the screenshot relative link is:

'html/archive/archive/1517380882/screenshot.png'

There is an extra 'archive' in the 2 links.

I can make this work if I symlink a directory 'archive' in 'html/archive' to point one directory up from 'html/archive'. That seems like a hack, or I'm doing something wrong.

Index: Add a way to sort the index table by column

Currently, archived items on the index page are sorted by reverse chronological order (most recent first), most likely they are presented in the same order as the original HTML bookmarks export.

It would be nice to be able to sort the table by clicking on each column's header. Eg.

  • Sort items by reverse/chronological order
  • Sort items by title
  • Sort items by original URL

This would probably require a bit of JavaScript.

Key error with submit to archive.org disabled

In config.py, I've set:

SUBMIT_ARCHIVE_DOT_ORG = os.getenv('SUBMIT_ARCHIVE_DOT_ORG', 'False').lower() == 'true'

Running ./archive.py ~/Downloads/bookmarks_export.html yields the following error:

File "/home/lhagan/bookmark-archiver/index.py", line 141, in write_html_link_index
    'archive_org': link['latest']['archive_org'] or 'https://web.archive.org/save/{}'.format(link['url']),
KeyError: 'archive_org'

Am I doing this wrong or have I encountered a bug here?

Can't run archive.py due to UTF-8 encoding issues

Any ideas why?

[2017-07-04 05:45:26] Starting archive from /root/bookmark-archiver/downloads/ril_export.html export file.
Traceback (most recent call last):
  File "./archive.py", line 521, in <module>
    create_archive(export_file, service=export_type, resume=resume_from)
  File "./archive.py", line 487, in create_archive
    dump_index(links, service)
  File "./archive.py", line 398, in dump_index
    f.write(index_html.format(*template_vars))
UnicodeEncodeError: 'ascii' codec can't encode character '\U0001f4c2' in position 3530: ordinal not in range(128)
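
The crash happens because the index file is opened with the default locale encoding (ASCII under e.g. LANG=C), which cannot represent the 📂 emoji in the template. A sketch of the usual fix, pinning the encoding at open() (the actual dump_index code may differ):

```python
index_html = '<h1>\U0001f4c2 {}</h1>'   # stand-in for the real template
template_vars = ('Archive',)

# encoding='utf-8' makes the write independent of the system locale
with open('index.html', 'w', encoding='utf-8') as f:
    f.write(index_html.format(*template_vars))
```

Setting a UTF-8 locale before running (e.g. export LC_ALL=en_US.UTF-8) would also work around it.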

"SyntaxError: can use starred expression only as assignment target" on Debian Jessie

Hi, this is my system

wget --version && python3 --version && google-chrome --version
GNU Wget 1.16 built on linux-gnu. [...]
Python 3.4.2
Google Chrome 59.0.3071.36 beta

uname -a Linux Debian-87-jessie-64-minimal 4.9.0-0.bpo.2-amd64 #1 SMP Debian 4.9.18-1~bpo8+1 (2017-04-10) x86_64 GNU/Linux

and this is the error I get. It's probably related to the Python version, but I'm not really sure.

python3 ./archive.py example_ril_export.html pocket
  File "./archive.py", line 91
    *'wget --no-clobber --page-requisites --adjust-extension --convert-links --no-parent'.split(' '),
               ^
SyntaxError: can use starred expression only as assignment target

Today (8th of May) it's working fine, following your instructions, on my Mac machine.
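
The culprit is the Python version: unpacking with * inside list literals was added in Python 3.5 (PEP 448), so the Python 3.4.2 parser rejects line 91 outright. The Mac presumably works because it runs Python 3.5 or newer. A sketch of the two forms:

```python
# Valid on Python 3.5+ (PEP 448), SyntaxError on Python 3.4:
wget_args = [
    *'wget --no-clobber --page-requisites'.split(' '),
    'https://example.com',
]

# Python 3.4-compatible equivalent using list concatenation:
wget_args = 'wget --no-clobber --page-requisites'.split(' ') + ['https://example.com']
```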

Archive Method: Wget: Detect and fix downloaded file encodings to utf-8

Best guess for encoding:

$ file -bi archive/1509683331/blog.wesleyac.com/posts/two-interview-questions.html
application/gzip; charset=binary

First un-gzip any gzipped files:

gunzip < archive/1509683331/blog.wesleyac.com/posts/two-interview-questions.html > archive/1509683331/blog.wesleyac.com/posts/two-interview-questions.decoded.html

Then re-detect the encoding and normalize it to UTF-8:

iconv -f ISO-8859-1 -t UTF-8 archive/1509683331/blog.wesleyac.com/posts/two-interview-questions.decoded.html > archive/1509683331/blog.wesleyac.com/posts/two-interview-questions.utf8.html
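
A sketch automating both steps (the gzip check uses the magic bytes; the ISO-8859-1 fallback is an assumption, and a real implementation might use a charset detector instead):

```python
import gzip

def normalize_to_utf8(path, fallback_encoding='ISO-8859-1'):
    with open(path, 'rb') as f:
        raw = f.read()
    if raw[:2] == b'\x1f\x8b':        # gzip magic bytes
        raw = gzip.decompress(raw)
    try:
        text = raw.decode('utf-8')    # already valid UTF-8? leave as-is
    except UnicodeDecodeError:
        text = raw.decode(fallback_encoding)
    with open(path, 'w', encoding='utf-8') as f:
        f.write(text)
```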

Experimenting with Windows 10 and Cygwin

Hello,
I managed to get as far as having Chromium start on Windows 10 using Cygwin. But then it seems to hang. Here is what I have done:

  1. Install Cygwin and open up a Bash.
  2. Do not run ./setup.sh, since that assumes apt (a package manager on Debian-based systems) or Homebrew (a macOS package manager). You must ensure everything is installed manually.
  3. Make sure Cygwin's Python 3 is installed, along with all dependencies (I had to manually install the requests module; you can do this with Cygwin's setup.exe).
  4. Install Chromium (I also tried my pre-installed Chrome, but when it hung, I installed Chromium, since that is what @pirate supports officially). Use the portable install to stay as close to a *nix-ish system as possible (no registry, etc.). May be unneeded, but... hey! ;-)
  5. In a Bash terminal type (your path may differ, of course, this is how I have it):
$ ln -s /S/Applications/chrlauncher-win32-stable-codecs-sync/bin/chrome.exe /usr/local/bin/chromium-browser

/usr/local/bin/ must be in Bash's path, of course

That should suffice to get you as far as I got:

$ /usr/bin/python3 archive.py ./bookmark_files/test_pocket.html
[*] [2018-01-20 20:57:11] Adding 0 new links from ./bookmark_files/test_pocket.html to index
[i] [2018-01-20 20:57:11] Updating 3 links in archive index...
[*] Checking Dependencies:
/usr/local/bin/chromium-browser

Note that this is the fourth run I did, so that may explain the "Adding 0 new links..." part. I created a very short test_pocket.html from the first few lines of my Pocket export.

I guess it is right to assume that @pirate does not use Windows, and it's clear that Windows is not supported, but I thought it should be doable with Cygwin, maybe with a few modifications.

The issue at hand is that Chromium (Chrome behaves the same) gets started, but in GUI mode. Then nothing more happens; it seems to hang somewhere. I checked whether the Windows ports use the same command-line switches, and yes, it seems they do (I tested chrome.exe --incognito from DOS as well as from Bash, using the same symlink archive.py uses, both successfully). I also tried adding my Chrome's user data dir to CHROME_USER_DATA_DIR as well as keeping it at None.

When I exit Chrome manually (the GUI is usable) archive.py will return with:

[X] Failed to parse Chrome version, is it installed properly?
    Run ./setup.sh, then confirm it was installed with: chromium-browser --version
    See https://github.com/pirate/bookmark-archiver for help.

Did anybody else try this on Windows?

Use of Jinja2 to render HTML content

The current script for archiving website URLs contains both the Python logic and the HTML markup. So in order to scale the project, i.e. to modify the HTML and redesign the web page that displays the archive, we also have to modify the script. It would be great to use a template-rendering library like Jinja2 to render the HTML content, so that the HTML and Python parts can be separated.
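
A minimal sketch of that separation, assuming Jinja2 (the template filename and variables are illustrative):

```python
from jinja2 import Environment, FileSystemLoader

# The HTML lives in templates/; Python only supplies the variables.
env = Environment(loader=FileSystemLoader('templates'))
template = env.get_template('link_index.html')   # illustrative filename

html = template.render(title='Archived Links', links=[
    {'url': 'https://example.com', 'timestamp': '1518721395'},
])
```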

Archiver tries to merge/detects conflict between two bookmarks which differ only in query string

I've been running the archiver locally against my Pinboard RSS feed to test #71, and I noticed that every time I run it, it asks if I want to clean up the archive. If I do that, it seems to detect that two different pages (both YouTube video pages) need to be merged, and then (rightly) warns me there's a conflict.

I've uploaded the two conflicting files, along with the corresponding JSON entries from archive/index.json, at https://dropbox.dzombak.com/bookmark-archiver/ for examination.

Those JSON entries in index.json are:

{
    "timestamp": "1518721395",
    "url": "https://www.youtube.com/watch?v=Kh0Y2hVe_bw",
    "domain": "www.youtube.com",
    "base_url": "www.youtube.com/watch",
    "tags": "humor is:video birds",
    "title": "WHAT ARE BIRDS - YouTube",
    "sources": [
        "downloads/feeds.pinboard.in.txt",
        "/Users/cdzombak/tmp/pinboard/pinboard-public.rss"
    ],
    "type": "youtube"
},
// ...
{
    "timestamp": "1519258772",
    "url": "https://www.youtube.com/watch?v=KhYfe4R2Es0&app=desktop",
    "domain": "www.youtube.com",
    "base_url": "www.youtube.com/watch",
    "tags": "swiftlang extensions bestpractices api design ux is:video",
    "title": "#Pragma Conference 2017 - Soroush Khanlou - You Deserve Nice Things - YouTube",
    "sources": [
        "downloads/feeds.pinboard.in.txt",
        "/Users/cdzombak/tmp/pinboard/pinboard-public.rss"
    ],
    "type": "youtube"
},

I've seen a similar problem on a server archiving a larger bookmarks collection, with pages from entirely different websites. Once my current archive job completes I'll add more examples from there.

I'm running this commit on my fork, which was branched off master in this repo a few hours ago. Locally I'm running macOS 10.13/Python 3.6.4; on my server I have Ubuntu 16.04/Python 3.5.2.

I started to dig into the code, but I haven't yet been able to follow what the cleanup code is doing. I do note that these bookmarks have unique timestamps.
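
Given that both entries above share base_url 'www.youtube.com/watch', one plausible explanation is that the cleanup/merge step keys links by base_url, which collapses any two URLs differing only in the query string. A sketch of a safer key, assuming that is the cause:

```python
parsed_links = [
    {'url': 'https://www.youtube.com/watch?v=Kh0Y2hVe_bw',
     'base_url': 'www.youtube.com/watch'},
    {'url': 'https://www.youtube.com/watch?v=KhYfe4R2Es0&app=desktop',
     'base_url': 'www.youtube.com/watch'},
]

def dedupe_key(link):
    # The full URL keeps watch?v=A and watch?v=B distinct, unlike
    # base_url, which strips the query string and collides here.
    return link['url']

merged = {dedupe_key(link): link for link in parsed_links}
assert len(merged) == 2   # no false conflict
```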

search.py: how to use it, description in readme?

From the issues I found out about the recommended way of searching the archive with search.py. I have set the archive path.

For some search terms I get this exception, but for others it works as expected. Maybe I'm not doing something right?

./search.py "church"
Traceback (most recent call last):
  File "./search.py", line 66, in <module>
    print('\n'.join(matches))
  File "./search.py", line 16, in <genexpr>
    return (l.decode().replace(archive_path, '') for l in ag.stdout.splitlines())
UnicodeDecodeError: 'utf-8' codec can't decode byte 0xcc in position 75: invalid continuation byte

If I add the --exact parameter, it does not raise the exception, but no results are returned either.
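
The byte 0xcc suggests some of the archived file names/contents aren't valid UTF-8, so the generator on line 16 crashes mid-iteration. A sketch of a tolerant variant (the helper name is hypothetical):

```python
def decode_matches(stdout_bytes, archive_path):
    # errors='replace' substitutes U+FFFD for undecodable bytes (like the
    # 0xcc continuation byte above) instead of raising UnicodeDecodeError.
    return (line.decode('utf-8', errors='replace').replace(archive_path, '')
            for line in stdout_bytes.splitlines())

print(list(decode_matches(b'/archive/1518721395/ch\xccurch.html\n', '/archive/')))
```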

Google Chrome Canary quit unexpectedly

I keep getting these crashes: the wget output appears, but no PDF or screenshot is produced.

Process:               Google Chrome Canary [80446]
Path:                  /Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary
Identifier:            Google Chrome Canary
Code Type:             X86-64 (Native)
Parent Process:        Python [80409]
Responsible:           Google Chrome Canary [80446]
User ID:               501

Date/Time:             2017-05-06 10:39:14.163 +0800
OS Version:            Mac OS X 10.12.4 (16E195)

Exception Type:        EXC_CRASH (SIGABRT)

Thread 0 Crashed:: Dispatch queue: com.apple.main-thread
0   libsystem_kernel.dylib        	0x00007fff9fb92d42 __pthread_kill + 10
1   libsystem_pthread.dylib       	0x00007fff9fc805bf pthread_kill + 90
2   libsystem_c.dylib             	0x00007fff9faf8420 abort + 129
3   google-chrome                 	0x0000000106b2ec79 main + 233
4   libdyld.dylib                 	0x00007fff9fa64235 start + 1

Binary Images:
       0x106b2e000 -        0x106b2eff0 +google-chrome (0) <1F12892F-A891-3795-BFFA-C762877F17B0> /usr/local/bin/google-chrome
       0x10c58a000 -        0x10c5c7dc7  dyld (433.5) <8239D0D7-66F6-3C44-A77F-586F74525DA3> /usr/lib/dyld
    0x7fff86dd9000 -     0x7fff86dd9fff  com.apple.Accelerate (1.11 - Accelerate 1.11) <E559CE70-1A9A-3C5C-9FB7-C51FDF82F03C> /System/Library/Frameworks/Accelerate.framework/Versions/A/Accelerate
    0x7fff86df2000 -     0x7fff8730bfeb  com.apple.vImage (8.1 - ???) <3992178B-0FF2-3B05-8830-6894BE8FD6D6> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vImage.framework/Versions/A/vImage
    0x7fff8730c000 -     0x7fff8747dff3  libBLAS.dylib (1185.50.4) <4087FFE0-627E-3623-96B4-F0A9A1991E09> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
    0x7fff8747e000 -     0x7fff87492ffb  libBNNS.dylib (15) <254698C7-7D36-3FFF-864E-ADEEEE543076> /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBNNS.dylib

New index single-source-of-truth instead of JSON: SQL Database w/ migrations

If I end up writing a backend, Django is not a bad choice because the migrations system would allow us to keep an up-to-date queryable SQLite database with all the links.

Running the archive.py script at any point would update the schema to the latest version, because it has the entire migration history from the time the index was last exported till now.

On every run, the json can be re-exported and the files updated if necessary. The SQLite database would also allow for fast searching and indexing of the links.

Files can be foreign-keyed on both uuid and their sha-512 hash to globally deduplicate and enable sharding later.
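
A minimal sketch of such a schema, assuming Django (model and field names are illustrative):

```python
import uuid

from django.db import models

class Snapshot(models.Model):
    # uuid primary key enables global dedup and sharding later
    id = models.UUIDField(primary_key=True, default=uuid.uuid4, editable=False)
    url = models.URLField(unique=True)
    timestamp = models.CharField(max_length=32, db_index=True)
    title = models.TextField(blank=True)
    tags = models.TextField(blank=True)

class ArchiveFile(models.Model):
    snapshot = models.ForeignKey(Snapshot, on_delete=models.CASCADE)
    path = models.TextField()
    sha512 = models.CharField(max_length=128, db_index=True)  # content-hash dedup key
```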

Add Support for Site Logins

I'm trying to archive a few bookmarks that require I be logged in to the site itself for content to be visible -- which means BA is just outputting a page with the content concealed at this time.

Is there a way to log in the Chromium instance BA is using, to prevent that from happening? Thanks much, very impressive work.
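
One workaround, assuming the Chromium invocation can be pointed at an existing profile (the CHROME_USER_DATA_DIR setting is mentioned elsewhere in these issues): log in once in a regular non-headless session, then reuse that profile's user data dir so the headless instance sends the same session cookies. A sketch, with an example profile path:

```python
import subprocess

# Example path: a profile where you have already logged in manually.
CHROME_USER_DATA_DIR = '/home/user/.config/chromium'

subprocess.run([
    'chromium-browser', '--headless', '--disable-gpu',
    '--user-data-dir={}'.format(CHROME_USER_DATA_DIR),  # standard Chrome flag
    '--print-to-pdf', 'https://example.com/members-only',
])
```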

Ubuntu 17.10: chromium version is not parsed correctly

I get the following error when running archive.py:

[*] Checking Dependencies:
/usr/bin/chromium-browser
Traceback (most recent call last):
  File "./archive.py", line 124, in <module>
    update_archive(archive_path, links, source=source, resume=resume, append=True)
  File "./archive.py", line 70, in update_archive
    archive_links(archive_path, links, source=source, resume=resume)
  File "/home/benjamin/bin/bookmark-archiver/archive_methods.py", line 40, in archive_links
    check_dependencies()
  File "/home/benjamin/bin/bookmark-archiver/util.py", line 63, in check_dependencies
    if int(version) < 59:
ValueError: invalid literal for int() with base 10: 'Using'

the reason is that

chromium --version

might give something like

Using PPAPI flash.
--ppapi-flash-path=/usr/lib/adobe-flashplugin/libpepflashplayer.so --ppapi-flash-version=
Chromium 62.0.3202.94 Built on Ubuntu , running on Ubuntu 17.10

on Ubuntu 17.10. I changed line 62 of util.py

version = result.stdout.decode('utf-8').replace('Google Chrome ', '').replace('Chromium ', '').split(' ', 1)[0].split('.', 1)[0] # TODO: regex might be better

to

version = result.stdout.decode('utf-8').split('Chromium',1)[1].replace('Google Chrome ', '').replace('Chromium ', '').split(' ', 1)[1].split('.', 1)[0]

but I could not find an immediate solution that also covers Google Chrome.
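
The TODO on that line already hints at the cleaner fix: a regex that matches either browser name and ignores the PPAPI noise. A sketch:

```python
import re

def parse_chrome_major_version(version_output):
    # Matches 'Google Chrome 59.0.3071.36 beta' as well as
    # 'Chromium 62.0.3202.94 Built on Ubuntu ...', skipping any
    # 'Using PPAPI flash.' lines printed before the version string.
    match = re.search(r'(?:Google Chrome|Chromium)\s+(\d+)\.', version_output)
    return int(match.group(1)) if match else None

print(parse_chrome_major_version(
    'Using PPAPI flash.\nChromium 62.0.3202.94 Built on Ubuntu , running on Ubuntu 17.10'
))  # -> 62
```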

Add a way to filter the index table (by typing text)

Similar to #40,

I'd like to be able to filter the index table by typing text in a Filter... text field. Eg. when typing github in the filtering field, only display items containing github in their title (or URL?)

  • Consider actually starting the filtering after the 2nd or 3rd character has been entered (to improve performance, plus filtering on a single character does not make much sense). Alternatively, only filter after the user hits Enter or a similar Go button.
  • Consider adding a clear button to clear the current filter text.

This would probably require a bit of JavaScript. I found a rather efficient filtering script, https://github.com/nodiscc/awesome-selfhosted/blob/yaml/html/filtering.js, which I was planning to use for another project, but it might as well serve here.

Note: this is different from full-text search (probably easier to achieve) and only relates to the main index.html archive index.

Enable usage with other browsers

Is it possible to make this work with other browsers, namely Firefox? (For those who aren't able to use Chrome.) Hopefully, the software can easily be made agnostic to the specific browser choice...

Add ability to run JS scripts during archiving with Playwright/Puppeteer

https://github.com/GoogleChrome/puppeteer is fantastic for scripting actions on pages before making a screenshot or PDF.

I could add support for custom Puppeteer scripts for certain URLs that need a user action to be performed before archiving (e.g. logging in or closing a welcome-message popup).

Puppeteer code looks like this:

        const puppeteer = require('puppeteer')  // import elided in the original excerpt
        // (excerpt; assumes an enclosing async function)

        const browser = await puppeteer.launch({headless: false})
        const page = await browser.newPage()

        await page.goto('https://carbon.now.sh')

        // select all existing code in the editor, then delete it
        const code_input = 'div.ReactCodeMirror div.CodeMirror-code > pre:nth-child(11)'
        await page.click(code_input)
        await page.keyboard.down('Meta')
        await page.keyboard.down('a')
        await page.keyboard.up('a')
        await page.keyboard.up('Meta')
        await page.keyboard.press('Backspace')

Config: Add a way to blacklist URLs from being archived or indexed

There are some items in my bookmarks I wish were not archived (to save bandwidth and disk space), for example items I already have archived via other means (youtube-dl), or items that are just bookmarks, with no value in offline use, etc.

Would it be possible to add an option where the user can provide a list of URLs, which will never be archived?

They would still appear in the archive index, maybe as grayed-out items, with the files/PDF/screenshot/archive.org links removed?
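
A sketch of what such an option could look like (the config name and patterns are illustrative):

```python
import re

# Illustrative config: URLs matching any pattern stay in the index
# but are never fetched/archived.
URL_BLACKLIST = [
    r'^https?://(www\.)?youtube\.com/',   # already archived via youtube-dl
    r'^https?://example\.com/ephemeral/',
]
BLACKLIST_PATTERNS = [re.compile(p) for p in URL_BLACKLIST]

def should_archive(url):
    return not any(p.search(url) for p in BLACKLIST_PATTERNS)
```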

Dockerfile enhancement

I recently submitted a pull request that paved the way for a Dockerfile for this application. I am opening this enhancement to engage in discussion with you on how to proceed.

I created a repository for the Dockerfile. It is already built on Docker Hub and works; however, it is under my name. I'm not sure whether you want to support automated builds going forward or not.

One other discussion point is that some platforms have trouble running google-chrome in a Docker container. You can read some lengthy discussion here. Running this container (or any container that utilizes Google Chrome) on an Ubuntu host will generate the "Failed to move to new namespace" error.

One solution I was playing with was to use PhantomJS instead of Google Chrome. I already have a Docker container built for it, but your application currently doesn't support PDF and PNG snapshots with PhantomJS. Another bonus to using PhantomJS is that the image is 236MB as opposed to 845MB with Google Chrome.

I am going to be playing with my fork of your application to allow the user to choose (via an env variable) whether to use PhantomJS or Google Chrome; if I figure something out I will test and submit a pull request.

This should be enough material for us to chat about - let me know if you have any questions.

Support archiving URLs found in bookmark descriptions

It would be nice to be able to also archive links in bookmark description fields (<DD> HTML "tag").
For example, the 1st bookmark below contains URLs in its description:

<DT><A HREF="https://en.wikipedia.org/wiki/SQLite" ADD_DATE="1504028066" PRIVATE="0" TAGS="doc,admin,dev,databases">SQLite - Wikipedia</A>
<DD>http://www.service-architecture.com/articles/database/sql-92.html
https://www.sqlite.org/wal.html
https://medium.com/linode-cube/sqlite-the-universal-sql-database-engine-a26199c366fc
<DT><A HREF="https://www.youtube.com/watch?v=j5nZhf8SjXw" ADD_DATE="1503928529" PRIVATE="0" TAGS="video,wtf">Needs more JPEG. - YouTube</A>

Use case: I sometimes use a single bookmark to store relevant links/comments about a topic, instead of creating multiple bookmarks (so that all the information can be found in one place). I imagine it would be best as an optional feature/config switch.
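
A sketch of extracting those description URLs during parsing (the regexes are illustrative, not the project's actual Netscape-export parser):

```python
import re

DD_RE = re.compile(r'<DD>(.*?)(?=<DT>|\Z)', re.DOTALL | re.IGNORECASE)
URL_RE = re.compile(r'https?://\S+')

def urls_from_descriptions(bookmarks_html):
    # Each <DD> block runs until the next <DT> bookmark (or end of file);
    # yield every http(s) URL found inside it.
    for description in DD_RE.findall(bookmarks_html):
        for url in URL_RE.findall(description):
            yield url
```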

--print-to-pdf not working in Chromium

The script works fine, except that PDFs are not generated when using the Chromium browser instead of Chrome. (I have not tried it with Chrome yet.) The --print-to-pdf step is not working in the script, though no exception is raised.

As a quick fix I used wkhtmltopdf, but the PDF output it renders is not up to the mark. Am I doing something wrong, or does the code need a bit of tweaking?

parsing a pocket json download jsondecodeerror "extra data"

I am not the best with Python, so I have not been able to figure out which JSON line is causing it to fail.

I think the culprit at line 95 may be the "tags": "ifttt,reddit", entry.

Thank you,

File contains invalid JSON: {
    "updated": "1518974038.00784",
    "sources": [
        "ril_export.html"
    ],
    "timestamp": "1495763887",
    "type": null,
    "base_url": "www.reddit.com/r/selfhosted/comments/6d9grx/looking_for_an_rss_homepage/",
    "latest": {
        "wget": "www.reddit.com/r/selfhosted/comments/6d9grx/looking_for_an_rss_homepage/index.html",
        "screenshot": "screenshot.png",
        "pdf": "output.pdf",
        "archive_org": "https://web.archive.org/web/20171226180936/https://www.reddit.com/r/selfhosted/comments/6d9grx/looking_for_an_rss_homepage/",
        "favicon": "favicon.ico"
    },
    "url": "https://www.reddit.com/r/selfhosted/comments/6d9grx/looking_for_an_rss_homepage/",
    "history": {
        "wget": [
            {
                "status": "succeded",
                "output": "www.reddit.com/r/selfhosted/comments/6d9grx/looking_for_an_rss_homepage/index.html",
                "timestamp": "1514311770",
                "duration": "427",
                "cmd": [
                    "wget",
                    "--timestamping",
                    "--adjust-extension",
                    "--no-parent",
                    "--page-requisites",
                    "--convert-links",
                    "https://www.reddit.com/r/selfhosted/comments/6d9grx/looking_for_an_rss_homepage/"
                ]
            }
        ],
        "screenshot": [
            {
                "status": "succeded",
                "output": "screenshot.png",
                "timestamp": "1514311773",
                "duration": "2222",
                "cmd": [
                "duration": "2222",                                                                                                                                                                                                [120/1885]
                "cmd": [
                    "chromium-browser",
                    "--headless",
                    "--disable-gpu",
                    "--screenshot",
                    "--window-size=1440,900",
                    "https://www.reddit.com/r/selfhosted/comments/6d9grx/looking_for_an_rss_homepage/"
                ]
            }
        ],
        "pdf": [
            {
                "status": "succeded",
                "output": "output.pdf",
                "timestamp": "1514311771",
                "duration": "2213",
                "cmd": [
                    "chromium-browser",
                    "--headless",
                    "--disable-gpu",
                    "--print-to-pdf",
                    "https://www.reddit.com/r/selfhosted/comments/6d9grx/looking_for_an_rss_homepage/"
                ]
            }
        ],
        "archive_org": [
            {
                "status": "succeded",
                "output": "https://web.archive.org/web/20171226180936/https://www.reddit.com/r/selfhosted/comments/6d9grx/looking_for_an_rss_homepage/",
                "timestamp": "1514311775",
                "duration": "835",
                "cmd": [
                    "curl",
                    "-I",
                    "https://web.archive.org/save/https://www.reddit.com/r/selfhosted/comments/6d9grx/looking_for_an_rss_homepage/"
                ]
            }
        ],
        "favicon": [
            {
                "status": "succeded",
                "output": "favicon.ico",
                "timestamp": "1514311776",
                "duration": "270",
                "cmd": [
                    "curl",
                    "https://www.google.com/s2/favicons?domain=www.reddit.com"
                ]
            }
        ]
    },
    "domain": "www.reddit.com",
    "tags": "ifttt,reddit",
    "title": "Looking for an RSS Homepage : selfhosted"
}}!
Traceback (most recent call last):
  File "/home/vargwolf/bookmark-archiver/archive.py", line 124, in <module>
    update_archive(archive_path, links, source=source, resume=resume, append=True)
  File "/home/vargwolf/bookmark-archiver/archive.py", line 70, in update_archive
    archive_links(archive_path, links, source=source, resume=resume)
  File "/home/vargwolf/bookmark-archiver/archive_methods.py", line 60, in archive_links
    raise e
  File "/home/vargwolf/bookmark-archiver/archive_methods.py", line 46, in archive_links
    archive_link(link_dir, link)
  File "/home/vargwolf/bookmark-archiver/archive_methods.py", line 70, in archive_link
    **parse_json_link_index(link_dir),
  File "/home/vargwolf/bookmark-archiver/index.py", line 122, in parse_json_link_index
    return json.load(f)
  File "/usr/lib/python3.5/json/__init__.py", line 268, in load
    parse_constant=parse_constant, object_pairs_hook=object_pairs_hook, **kw)
  File "/usr/lib/python3.5/json/__init__.py", line 319, in loads
    return _default_decoder.decode(s)
  File "/usr/lib/python3.5/json/decoder.py", line 342, in decode
    raise JSONDecodeError("Extra data", s, end)
json.decoder.JSONDecodeError: Extra data: line 95 column 2 (char 3608)
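
The "Extra data" at line 95 column 2 is consistent with the duplicated closing brace visible at the end of the quoted file ('}}'): the link's index JSON has trailing junk after the first complete object. A sketch for locating it with raw_decode:

```python
import json

def find_extra_data(path):
    # raw_decode() parses the first complete JSON value and returns where
    # it ends; anything after that is the 'Extra data' json.load rejects.
    # (raw_decode does not skip leading whitespace, hence the lstrip.)
    with open(path, 'r', encoding='utf-8') as f:
        text = f.read().lstrip()
    obj, end = json.JSONDecoder().raw_decode(text)
    trailing = text[end:].strip()
    if trailing:
        print('Valid JSON ends at char {}; trailing data: {!r}'.format(end, trailing[:40]))
    return obj
```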
