Browsertrix Cloud
Browsertrix Cloud is an open-source, cloud-native, high-fidelity, browser-based crawling service designed to make web archiving easier and more accessible for everyone.
The service provides an API and UI for scheduling crawls, viewing results, and managing all aspects of the crawling process. The system provides the orchestration and management around crawling, while the actual crawling is performed by Browsertrix Crawler containers, which are launched for each crawl.
The system is designed to run equally well in Kubernetes and Docker.
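The repository ships both deployment paths: a docker-compose.yml for Docker and a Helm chart under chart/ for Kubernetes. A minimal sketch of each, assuming default configuration (the "btrix" release name and the values-file override are illustrative, not required):

```shell
# Docker: bring up the stack using the bundled compose file
docker-compose up -d

# Kubernetes: install the bundled Helm chart
# ("btrix" is an illustrative release name; -f overrides default values)
helm install btrix ./chart -f ./chart/values.yaml
```

See the Deployment page for the actual, supported steps and configuration options.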
See Features for a high-level list of planned features.
Deployment
See the Deployment page for information on how to deploy Browsertrix Cloud.
Development Status
Browsertrix Cloud is currently at a pre-alpha stage and not yet ready for production use. This is an ambitious project, and there's a lot to be done!
If you would like to help in a particular way, please open an issue or reach out to us in other ways.
License
Browsertrix Cloud is made available under the AGPLv3 License.
If you would like to use it under a different license or have a question, please reach out as that may be a possibility.