browsertrix/chart/test/test.yaml
Tessa Walsh 032859f361
Support multiple crawler versions (#1420)
Fixes #1385 

## Changes
Supports multiple crawler 'channels', which can be configured to use
different browsertrix-crawler versions:
- Replaces `crawler_image` in the Helm chart with a `crawler_channels`
array, similar to how storages are handled (see the values sketch after
this list)
- The `default` crawler channel must always be provided and specifies
the default crawler image
- Adds a backend `/orgs/{oid}/crawlconfigs/crawler-channels` API endpoint
to fetch information about the available crawler versions (name, image,
and label), along with a test (an example response is sketched after this
list)
- Adds a crawler channel select to the workflow creation/edit screens and
the profile creation dialog, and updates the related API endpoints and
configmaps accordingly. The select dropdown is shown only if more than
one channel is configured.
- Adds `crawlerChannel` to workflow and crawl details.
- Adds `image` to crawl details, used to display the actual crawler image
used as part of the crawl.
- Modifies the `crawler_crawl_id` backend test fixture to use the `test`
crawler channel, to ensure that crawler versions other than the latest work
- Adds a migration that sets `crawlerChannel` to `default` on existing
workflow and profile objects and in workflow configmaps
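
For reference, the values change looks roughly like this — a minimal sketch, assuming the old configuration was a single top-level `crawler_image` key as described above; the second channel's id and image tag here are purely illustrative:

```yaml
# before: one crawler image for the whole deployment
# crawler_image: "docker.io/webrecorder/browsertrix-crawler:latest"

# after: one entry per channel; the `default` channel is required
crawler_channels:
  - id: default
    image: "docker.io/webrecorder/browsertrix-crawler:latest"
  - id: testing
    image: "docker.io/webrecorder/browsertrix-crawler:1.0.0"  # illustrative tag
```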
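The new endpoint surfaces these configured channels to the frontend. A hypothetical response sketch — the real payload is JSON, and the field names are assumed here to mirror the chart entries (an id plus an image):

```yaml
# GET /orgs/{oid}/crawlconfigs/crawler-channels
channels:
  - id: default
    image: docker.io/webrecorder/browsertrix-crawler:latest
  - id: test
    image: docker.io/webrecorder/browsertrix-crawler:latest
```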

---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
2024-01-16 15:32:12 -08:00

# test overrides
# --------------
# use local images built to :latest tag
backend_image: docker.io/webrecorder/browsertrix-backend:latest
frontend_image: docker.io/webrecorder/browsertrix-frontend:latest
backend_pull_policy: "Never"
frontend_pull_policy: "Never"
default_crawl_filename_template: "@ts-testing-@hostsuffix.wacz"
operator_resync_seconds: 3
# for testing only
crawler_extra_cpu_per_browser: 300m
crawler_extra_memory_per_browser: 256Mi
crawler_channels:
  - id: default
    image: "docker.io/webrecorder/browsertrix-crawler:latest"
  - id: test
    image: "docker.io/webrecorder/browsertrix-crawler:latest"
mongo_auth:
  # specify either username + password (for local mongo)
  username: root
  password: PASSWORD@
superuser:
  # set this to enable a superuser admin
  email: admin@example.com
  # optional: if not set, automatically generated
  # change or remove this
  password: PASSW0RD!
# test max pages per crawl global limit
max_pages_per_crawl: 4
registration_enabled: "0"
# log failed crawl pods to operator backend
log_failed_crawl_lines: 200
# disable for tests
disk_utilization_threshold: 0
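# usage (assumed, for illustration): layer this file over the base values
# when installing the chart for tests, e.g.:
#   helm upgrade --install -f ./chart/values.yaml -f ./chart/test/test.yaml btrix ./chart/
# the release name and paths above are assumptions, not taken from this repo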