Tessa Walsh 21ae38362e
Add endpoints to read pages from older crawl WACZs into database (#1562)
Fixes #1597

New endpoints (replacing the old migration) to re-add crawl pages to the
database from WACZs.

After a few implementation attempts, we settled on using
[remotezip](https://github.com/gtsystem/python-remotezip) to handle
parsing of the zip files and streaming their contents line by line for
pages. I've also modified the sync log streaming to use remotezip,
which allows us to remove our own zip module and let remotezip handle
the complexity of parsing zip files.
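
To make the approach concrete, here is a minimal sketch of reading pages this way; the presigned URL and the `pages/pages.jsonl` member name are assumptions for illustration, not the exact code from this PR:

```python
# Minimal sketch: stream pages.jsonl out of a remote WACZ line by line.
# remotezip fetches data with HTTP range requests, so only the zip central
# directory and the requested member are downloaded, not the whole WACZ.
import json

from remotezip import RemoteZip


def iter_pages(presigned_wacz_url: str):
    """Yield one page record per JSON line of pages.jsonl in a remote WACZ."""
    with RemoteZip(presigned_wacz_url) as wacz:
        with wacz.open("pages/pages.jsonl") as fh:
            for line in fh:
                line = line.strip()
                if not line:
                    continue
                page = json.loads(line)
                if "format" in page:
                    continue  # skip the pages.jsonl header record
                yield page
```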

Database inserts for pages from WACZs are batched 100 at a time to help
speed up the endpoint, and the task is kicked off with
asyncio.create_task so that the endpoint can return a response without
blocking.
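
A sketch of that batching pattern, assuming a Motor (async MongoDB) collection; `pages_coll` and `add_crawl_pages` are illustrative names, and `iter_pages` is the generator from the sketch above, not the actual Browsertrix helpers:

```python
# Batch page inserts 100 at a time and run the whole job as a background task.
import asyncio

BATCH_SIZE = 100  # insert pages 100 at a time, as described above


async def add_crawl_pages(pages_coll, presigned_wacz_url: str) -> None:
    batch = []
    for page in iter_pages(presigned_wacz_url):
        batch.append(page)
        if len(batch) >= BATCH_SIZE:
            await pages_coll.insert_many(batch)  # one round trip per 100 pages
            batch = []
    if batch:  # flush any remainder smaller than a full batch
        await pages_coll.insert_many(batch)


# Kick off the work without awaiting it, so the endpoint responds immediately:
# task = asyncio.create_task(add_crawl_pages(pages_coll, presigned_wacz_url))
```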

StorageOps now contains a method for streaming the bytes of any file in
a remote WACZ, requiring only the presigned URL for the WACZ and the
name of the file to stream.
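
A hedged sketch of what such a method could look like; the name `stream_wacz_file` and the chunk size are assumptions, not the actual StorageOps API:

```python
# Hypothetical helper: stream the raw bytes of any member of a remote WACZ,
# given only the WACZ's presigned URL and the member filename.
from typing import Iterator

from remotezip import RemoteZip


def stream_wacz_file(
    presigned_url: str, filename: str, chunk_size: int = 256 * 1024
) -> Iterator[bytes]:
    """Yield chunks of a single file stored inside a remote WACZ."""
    with RemoteZip(presigned_url) as wacz:
        with wacz.open(filename) as fh:
            while chunk := fh.read(chunk_size):
                yield chunk
```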

Browsertrix

Browsertrix is an open-source, cloud-native, high-fidelity, browser-based crawling service designed to make web archiving easier and more accessible for everyone.

The service provides an API and UI for scheduling crawls, viewing results, and managing all aspects of the crawling process. The system provides the orchestration and management around crawling, while the actual crawling is performed by Browsertrix Crawler containers, which are launched for each crawl.

See browsertrix.com for a feature overview and information about Browsertrix hosting.

Documentation

The full docs for using, deploying, and developing Browsertrix are available at: https://docs.browsertrix.cloud

Deployment

The latest deployment documentation is available at: https://docs.browsertrix.cloud/deploy

The docs cover deploying Browsertrix in different environments using Kubernetes, from a single-node setup to scalable clusters in the cloud.

Previously, Browsertrix also supported Docker Compose and podman-based deployment. This support has been deprecated due to the complexity of maintaining feature parity across different setups, and because various Kubernetes deployment options are now available and easy to set up, even on a single machine.

Making deployment of Browsertrix as easy as possible remains a key goal, and we welcome suggestions for how we can further improve our Kubernetes deployment options.

If you just want to run a single crawl, consider trying Browsertrix Crawler first to test out the crawling capabilities.

Development Status

Browsertrix is currently in beta. Though the system and backend API are fairly stable, we are working on many additional features.

Additional developer documentation is available at https://docs.browsertrix.cloud/develop

Please see the GitHub issues and this GitHub Project for our current project plan and tasks.

License

Browsertrix is made available under the AGPLv3 License.

Documentation is made available under the Creative Commons Attribution 4.0 International License.