
Browsertrix
Browsertrix is a cloud-native, high-fidelity, browser-based crawling service designed to make web archiving easier and more accessible for everyone.

The service provides an API and UI for scheduling crawls, viewing results, and managing all aspects of the crawling process. The system handles the orchestration and management around crawling, while the actual crawling is performed by Browsertrix Crawler containers, which are launched for each crawl.

See webrecorder.net/browsertrix for a feature overview and information about how to sign up for Webrecorder's hosted Browsertrix service.

Documentation

The full docs for using, deploying, and developing Browsertrix are available at docs.browsertrix.com.

Our docs are created with Material for MkDocs.

Deployment

The latest deployment documentation is available at docs.browsertrix.com/deploy.

The docs cover deploying Browsertrix in different environments using Kubernetes, from a single-node setup to scalable clusters in the cloud.
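As a rough sketch, a single-node deployment generally amounts to installing the Browsertrix Helm chart into a local Kubernetes cluster. The chart path and values file below are illustrative placeholders, not the real release artifacts — see docs.browsertrix.com/deploy for the current chart location and configuration options:

```shell
# Install (or upgrade) Browsertrix via its Helm chart into the current
# Kubernetes cluster. The chart archive path and values file here are
# placeholders; consult docs.browsertrix.com/deploy for the real ones.
helm upgrade --install btrix \
  ./browsertrix-chart.tgz \
  --values my-values.yaml

# Watch the pods come up in the default namespace.
kubectl get pods --watch
```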

Early on, Browsertrix also supported Docker Compose and podman-based deployment. This support was deprecated due to the complexity of maintaining feature parity across different setups, and because various Kubernetes deployment options are now available and easy to run, even on a single machine.

Making deployment of Browsertrix as easy as possible remains a key goal, and we welcome suggestions for how we can further improve our Kubernetes deployment options.

If you just want to try running a single crawl, you may want to start with Browsertrix Crawler to test out the crawling capabilities.
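For a quick local test of the crawler on its own, Browsertrix Crawler can be run directly with Docker. The invocation below is a minimal sketch — check the Browsertrix Crawler README for the full, current option list:

```shell
# Crawl a single site with Browsertrix Crawler, writing output
# (including a WACZ archive) to ./crawls on the host.
# Flags shown are a minimal example; see the Browsertrix Crawler
# README for all available options.
docker run -v "$PWD/crawls:/crawls/" -it webrecorder/browsertrix-crawler \
  crawl --url https://example.com/ --generateWACZ --collection test-crawl
```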

Contributing

Though the system and backend API are fairly stable, we are working on many additional features. Please see the GitHub issues and this GitHub Project for our current project plan and tasks.

Guides for getting started with local development are available at docs.browsertrix.com/develop.

Translation

We use Weblate to manage translation contributions.


License

Browsertrix is made available under the AGPLv3 License.

Documentation is made available under the Creative Commons Attribution 4.0 International License.