Ilya Kreymer 544346d1d4
backend: make crawlconfigs mutable! (#656) (#662)
* backend: make crawlconfigs mutable! (#656)
- crawlconfig PATCH /{id} can now receive a new JSON config to replace the old one (in addition to scale, schedule, tags)
- exclusions: add / remove APIs mutate the current crawlconfig, do not result in a new crawlconfig created
- exclusions: ensure crawl job 'config' is updated when exclusions are added/removed, unify add/remove exclusions on crawl
- k8s: crawlconfig json is updated along with scale
- k8s: stateful set is restarted by updating annotation, instead of changing template
- crawl object: now has 'config', as well as 'profileid', 'schedule', 'crawlTimeout', 'jobType' properties to ensure anything that is changeable is stored on the crawl
- crawlconfigcore: store shared properties between crawl and crawlconfig in new crawlconfigcore (includes 'schedule', 'jobType', 'config', 'profileid', 'crawlTimeout', 'tags', 'oid')
- crawlconfig object: remove 'oldId', 'newId', disallow deactivating/deleting while crawl is running
- rename 'userid' -> 'createdBy'
- remove unused 'completions' field
- add missing return to fix /run response
- crawlout: ensure 'profileName' is resolved on CrawlOut from profileid
- crawlout: return 'name' instead of 'configName' for consistent response
- update: 'modified', 'modifiedBy' fields to set modification date and user modifying config
- update: ensure PROFILE_FILENAME is updated in the configmap if profileid is provided, cleared if profileid == ""
- update: return 'settings_changed' and 'metadata_changed' if either crawl settings or metadata changed (a request sketch follows this list)
- tests: update tests to check settings_changed/metadata_changed return values
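As a rough illustration, updating a crawlconfig in place might look like the following. This is a minimal sketch: the base URL, auth header, and exact payload field names are assumptions for illustration, not the verified API surface.

```python
import requests

API = "https://app.example.com/api"  # hypothetical deployment URL
HEADERS = {"Authorization": "Bearer <jwt>"}  # auth scheme assumed

# PATCH mutates the existing crawlconfig; no new config object is created
resp = requests.patch(
    f"{API}/orgs/<oid>/crawlconfigs/<cid>",
    headers=HEADERS,
    json={
        "config": {"seeds": [{"url": "https://example.com/"}]},  # new JSON config
        "scale": 2,
        "schedule": "0 0 * * 1",
        "tags": ["weekly"],
    },
)
data = resp.json()
# response flags indicate whether crawl settings and/or metadata changed
print(data["settings_changed"], data["metadata_changed"])
```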

add revision tracking to crawlconfig:
- store each revision in a separate MongoDB collection
- revisions accessible via /crawlconfigs/{cid}/revs (see the sketch after this list)
- store 'rev' int in crawlconfig and in crawljob
- only add revision history if crawl config changed
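As an example, listing the revision history could look like this sketch (the response shape and field names are assumptions based on the fields listed above):

```python
import requests

API = "https://app.example.com/api"  # hypothetical deployment URL
HEADERS = {"Authorization": "Bearer <jwt>"}  # auth scheme assumed

# revision entries are only recorded when the crawl config itself changed
resp = requests.get(f"{API}/orgs/<oid>/crawlconfigs/<cid>/revs", headers=HEADERS)
for rev in resp.json():  # assumes a plain list of revision objects
    print(rev["rev"], rev.get("modified"), rev.get("modifiedBy"))
```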

migration:
- update to db v3
- copy fields from crawlconfig -> crawl
- rename userid -> createdBy
- copy userid -> modifiedBy, created -> modified
- skip invalid crawls (missing config), make createdBy optional (just in case); a migration sketch follows this list
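A minimal sketch of what this migration does, written against pymongo; the collection names, field names, and version bookkeeping are assumptions for illustration:

```python
from pymongo import MongoClient

db = MongoClient("mongodb://localhost:27017")["browsertrixcloud"]  # db name assumed

# rename userid -> createdBy; copy userid -> modifiedBy, created -> modified
for cfg in db.crawl_configs.find({}):
    db.crawl_configs.update_one(
        {"_id": cfg["_id"]},
        {
            "$set": {
                "createdBy": cfg.get("userid"),  # optional: may be missing
                "modifiedBy": cfg.get("userid"),
                "modified": cfg.get("created"),
            },
            "$unset": {"userid": ""},
        },
    )

# copy shared fields from crawlconfig -> crawl, skipping invalid crawls
for crawl in db.crawls.find({}):
    cfg = db.crawl_configs.find_one({"_id": crawl.get("cid")})
    if not cfg or not cfg.get("config"):
        continue  # skip crawls with a missing config
    shared = {
        k: cfg[k]
        for k in ("config", "profileid", "schedule", "crawlTimeout", "jobType")
        if k in cfg
    }
    db.crawls.update_one({"_id": crawl["_id"]}, {"$set": shared})

db.version.update_one({}, {"$set": {"version": "3"}}, upsert=True)  # db v3
```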

frontend: update crawl config keys to match the new API (#681), use the new PATCH endpoint, and load the config from the crawl object in the details view

---------

Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
Co-authored-by: sua yoo <sua@webrecorder.org>
Co-authored-by: sua yoo <sua@suayoo.com>
2023-03-07 20:36:50 -08:00

Browsertrix Cloud

Browsertrix Cloud is an open-source, cloud-native, high-fidelity, browser-based crawling service designed to make web archiving easier and more accessible for everyone.

The service provides an API and UI for scheduling crawls, viewing results, and managing all aspects of the crawling process. The system provides the orchestration and management around crawling, while the actual crawling is performed by Browsertrix Crawler containers, which are launched for each crawl.

See Features for a high-level list of planned features.

Documentation

The full docs for using, deploying and developing Browsertrix Cloud are available at: https://docs.browsertrix.cloud

Deployment

The latest deployment documentation is available at: https://docs.browsertrix.cloud/deploy

The docs cover deploying Browsertrix Cloud in different environments using Kubernetes, from a single-node setup to scalable clusters in the cloud.

Previously, Browsertrix Cloud also supported Docker Compose and podman-based deployment. These options are now deprecated due to the complexity of maintaining feature parity across different setups, and because various Kubernetes deployment options are now available and easy to set up, even on a single machine.

Making deployment of Browsertrix Cloud as easy as possible remains a key goal, and we welcome suggestions for how we can further improve our Kubernetes deployment options.

If you just want to try running a single crawl, you may want to try Browsertrix Crawler first to test out its crawling capabilities.

Development Status

Browsertrix Cloud is currently in beta. Though the system and backend API are fairly stable, we are working on many additional features.

Additional developer documentation is available at https://docs.browsertrix.cloud/dev

Please see the GitHub issues and this GitHub Project for our current project plan and tasks.

License

Browsertrix Cloud is made available under the AGPLv3 License.

If you would like to use it under a different license or have a question, please reach out as that may be a possibility.