QA Runs Initial Backend Implementation (#1586)
Ilya Kreymer, commit 4f676e4e82, 2024-03-20
Supports running QA Runs via the QA API!

Builds on top of the `issue-1498-crawl-qa-backend-support` branch, fixes
#1498

Also requires the latest Browsertrix Crawler 1.1.0+ (from the
webrecorder/browsertrix-crawler#469 branch)

Notable changes:
- `QARun` objects contain info about QA runs, which are crawls
performed on data loaded from existing crawls.

- Various crawl db operations can be performed on either the crawl or the
`qa.` object, and core crawl fields have been moved to `CoreCrawlable`.

- While running, `QARun` data is stored in a single `qa` object, while
finished QA runs are added to the `qaFinished` dictionary on the Crawl. The
QA list API returns data from the finished list, sorted by most recent
first (see the model sketch after this list).

- Includes additional type fixes / type safety, especially around
`BaseCrawl` / `Crawl` / `UploadedCrawl` functionality, also creating specific
`get_upload()`, `get_basecrawl()`, and `get_crawl()` getters for internal use and
`get_crawl_out()` for the API.

- Supports filtering and sorting pages via `qaFilterBy` (screenshotMatch, textMatch)
along with `gt`, `lt`, `gte`, and `lte` params to return pages based on QA results
(see the request example after this list).
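
Below is a minimal sketch of the storage shape described above, assuming a Pydantic-style model; only `QARun`, `qa`, and `qaFinished` come from this PR, while the other field names (`state`, `started`, `finished`) are illustrative assumptions:

```python
# Illustrative sketch only -- not the actual backend models from this PR.
# Only QARun, qa, and qaFinished are named in the PR; other fields are assumed.
from datetime import datetime
from typing import Dict, List, Optional

from pydantic import BaseModel


class QARun(BaseModel):
    """Info about one QA run: a crawl performed on data from an existing crawl."""

    id: str
    state: str  # assumed field, e.g. "running" or "complete"
    started: Optional[datetime] = None
    finished: Optional[datetime] = None


class Crawl(BaseModel):
    """Subset of a crawl document, showing where QA data is kept."""

    id: str
    qa: Optional[QARun] = None  # the currently running QA run, if any
    qaFinished: Dict[str, QARun] = {}  # finished QA runs, keyed by run id


def list_qa_runs(crawl: Crawl) -> List[QARun]:
    """Return finished QA runs, most recent first, as the QA list API does."""
    return sorted(
        crawl.qaFinished.values(),
        key=lambda run: run.finished or datetime.min,
        reverse=True,
    )
```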
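
And a hedged example of the page-filtering params; the endpoint path, IDs, and token below are placeholders, and only the `qaFilterBy`, `gt`, `lt`, `gte`, and `lte` parameter names come from this PR:

```python
# Hypothetical request showing the QA page-filter params; the URL path is a placeholder.
import requests

resp = requests.get(
    "https://btrix.example.com/api/orgs/<org-id>/crawls/<crawl-id>/qa/<qa-run-id>/pages",
    headers={"Authorization": "Bearer <access-token>"},
    params={
        "qaFilterBy": "screenshotMatch",  # or "textMatch"
        "gte": 0.9,  # pages with a screenshot match score >= 0.9 ...
        "lt": 1.0,  # ... and < 1.0
    },
)
pages = resp.json()
```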

---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>

Browsertrix Logo

Browsertrix is an open-source, cloud-native, high-fidelity, browser-based crawling service designed to make web archiving easier and more accessible for everyone.

The service provides an API and UI for scheduling crawls, viewing results, and managing all aspects of the crawling process. It handles the orchestration and management around crawling, while the actual crawling is performed by Browsertrix Crawler containers, which are launched for each crawl.

See browsertrix.com for a feature overview and information about Browsertrix hosting.

Documentation

The full docs for using, deploying, and developing Browsertrix are available at: https://docs.browsertrix.cloud

Deployment

The latest deployment documentation is available at: https://docs.browsertrix.cloud/deploy

The docs cover deploying Browsertrix in different environments using Kubernetes, from a single-node setup to scalable clusters in the cloud.

Previously, Browsertrix also supported Docker Compose and podman-based deployment. These have been deprecated due to the complexity of maintaining feature parity across different setups, and because various Kubernetes deployment options are now available and easy to deploy, even on a single machine.

Making deployment of Browsertrix as easy as possible remains a key goal, and we welcome suggestions for how we can further improve our Kubernetes deployment options.

If you just want to try running a single crawl, you may want to try Browsertrix Crawler on its own first to test out the crawling capabilities.

Development Status

Browsertrix is currently in beta. Though the system and backend API are fairly stable, we are working on many additional features.

Additional developer documentation is available at https://docs.browsertrix.cloud/develop

Please see the GitHub issues and this GitHub Project for our current project plan and tasks.

License

Browsertrix is made available under the AGPLv3 License.

Documentation is made available under the Creative Commons Attribution 4.0 International License.