browsertrix/chart/test/test.yaml
Ilya Kreymer fb3d88291f
Background Jobs Work (#1321)
Fixes #1252 

Supports a generic background job system with two background jobs,
CreateReplicaJob and DeleteReplicaJob.
- CreateReplicaJob runs on new crawls, uploads, and profiles, and updates the
`replicas` array with info about the replica after the job succeeds.
- DeleteReplicaJob deletes the replica.
- Both jobs are created from the new `replica_job.yaml` template (see the
sketch after this list). CreateReplicaJob sets secrets for both the primary
storage and the replica storage, while DeleteReplicaJob only needs the
replica storage.
- The job is processed in the operator when it is finalized (deleted), which
should happen immediately once the job is done, either because it succeeded
or because the backoffLimit (currently set to 3) was reached.
- The /jobs/ API lists all jobs in a paginated response, with filtering and sorting (see the example after this list)
- /jobs/<job id> returns details for a particular job
- tests: nightly tests updated to check create and delete replica jobs for crawls as well as uploads, plus the job API endpoints
- tests: also fixes timeouts in nightly tests to avoid crawls finishing too quickly.
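
For illustration, here is a minimal sketch of the general shape such a Job template could take; the names, labels, image, and secret wiring below are hypothetical and not the actual `replica_job.yaml` contents:

```yaml
# Hypothetical sketch only -- not the actual replica_job.yaml template.
apiVersion: batch/v1
kind: Job
metadata:
  name: create-replica-job-example
spec:
  backoffLimit: 3              # retried up to 3 times, as noted above
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: replicate
          # assumed copy tool, purely for illustration
          image: rclone/rclone:latest
          args: ["copy", "primary:source-bucket/path", "replica:dest-bucket/path"]
          envFrom:
            # CreateReplicaJob mounts both storage secrets;
            # DeleteReplicaJob would only need the replica one
            - secretRef:
                name: primary-storage-secret
            - secretRef:
                name: replica-storage-secret
```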
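Similarly, a rough sketch of what the paginated /jobs/ listing might return; the query parameters and field names are assumptions for illustration, not confirmed from this change (the API returns the JSON equivalent):

```yaml
# GET /jobs/?page=1&pageSize=10&sortBy=finished
# parameter and field names assumed; shown as YAML for readability
items:
  - id: create-replica-example-id
    type: create-replica
    success: true
    started: "2023-11-02T20:00:00Z"
    finished: "2023-11-02T20:02:00Z"
total: 1
page: 1
pageSize: 10
```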

---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
2023-11-02 13:02:17 -07:00


# test overrides
# --------------
# use local images built to :latest tag
backend_image: docker.io/webrecorder/browsertrix-backend:latest
frontend_image: docker.io/webrecorder/browsertrix-frontend:latest
crawler_image: docker.io/webrecorder/browsertrix-crawler:0.12.0
backend_pull_policy: "Never"
frontend_pull_policy: "Never"
default_crawl_filename_template: "@ts-testing-@hostsuffix.wacz"
operator_resync_seconds: 3
# for testing only
crawler_extra_cpu_per_browser: 300m
crawler_extra_memory_per_browser: 256Mi
mongo_auth:
  # specify either username + password (for local mongo)
  username: root
  password: PASSWORD@

superuser:
  # set this to enable a superuser admin
  email: admin@example.com

  # optional: if not set, automatically generated
  # change or remove this
  password: PASSW0RD!
# test max pages per crawl global limit
max_pages_per_crawl: 4
registration_enabled: "0"
# log failed crawl pods to operator backend
log_failed_crawl_lines: 200
# disable for tests
disk_utilization_threshold: 0