browsertrix/backend/btrixcloud/templates/crawl_job.yaml
Ilya Kreymer 00fb8ac048
Concurrent Crawl Limit (#874)
concurrent crawl limits (addresses #866):
- support limits on concurrent crawls that can be run within a single org
- change the 'waiting' state to 'waiting_org_limit' for the concurrent crawl limit and 'waiting_capacity' for capacity-based limits

orgs:
- add 'maxConcurrentCrawl' to a new 'quotas' object on orgs
- add a /quotas endpoint for updating the quotas object (see the sketch after this list)
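To make the quotas change concrete, here is a minimal sketch of a quotas object and an update endpoint, assuming the FastAPI + Pydantic stack used by the backend; the route path, model name, and in-memory store are illustrative assumptions, not the actual btrixcloud code.

from fastapi import APIRouter
from pydantic import BaseModel

router = APIRouter()

# Stand-in for the orgs collection; the real backend persists orgs in its database.
ORGS: dict = {}


class OrgQuotas(BaseModel):
    # 0 is treated here as "no limit" (assumption)
    maxConcurrentCrawl: int = 0


@router.post("/orgs/{oid}/quotas")
async def update_quotas(oid: str, quotas: OrgQuotas):
    # Replace the org's 'quotas' object with the submitted values
    org = ORGS.setdefault(oid, {})
    org["quotas"] = quotas.dict()
    return {"updated": True}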

operator:
- add all crawljobs as related objects; these appear to be returned in creation order
- operator: if a concurrent crawl limit is set, ensure the current job is within the first N crawljobs (as provided via the 'related' list of crawljob objects) before it can proceed to 'starting'; otherwise set it to 'waiting_org_limit' (sketched after this list)
- api: add org /quotas endpoint for configuring quotas
- remove 'new' state, always start with 'starting'
- crawljob: add 'oid' to crawljob spec and label for easier querying
- more stringent state transitions: add allowed_from to set_state() (also sketched after this list)
- ensure state transitions only happen from allowed states, while 'failed' and 'canceled' can happen from any state
- ensure 'finished' and 'state' are synced from the db if a transition is not allowed
- add crawl indices by oid and cid
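The concurrent-crawl-limit check boils down to a position test against the related crawljobs. A minimal sketch, assuming 'related' arrives in creation order and each entry carries standard Kubernetes metadata; the helper function and field access are illustrative, not the operator's actual code.

def can_start(job_name: str, related: list, max_concurrent: int) -> bool:
    """Return True if this job is within the first N crawljobs for its org."""
    if max_concurrent <= 0:
        # No limit configured for the org
        return True
    first_n = [cj["metadata"]["name"] for cj in related[:max_concurrent]]
    return job_name in first_n


# Example: with a limit of 1, only the oldest crawljob may start
related = [
    {"metadata": {"name": "crawljob-abc"}},
    {"metadata": {"name": "crawljob-def"}},
]
state = "starting" if can_start("crawljob-def", related, 1) else "waiting_org_limit"
# state == "waiting_org_limit"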
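The stricter state handling can likewise be sketched as a guarded setter: normal transitions must come from an allowed previous state, terminal states may be set from anywhere, and a rejected transition changes nothing (the real operator re-syncs 'finished' and 'state' from the db in that case). This is a simplified illustration, not the actual set_state() implementation.

from typing import List, Optional

ANY = None  # sentinel: transition allowed from any state


def set_state(crawl: dict, new_state: str, allowed_from: Optional[List[str]]) -> bool:
    """Apply the transition if permitted; return False (and change nothing) otherwise."""
    if allowed_from is ANY or crawl["state"] in allowed_from:
        crawl["state"] = new_state
        return True
    return False


crawl = {"state": "waiting_org_limit"}
set_state(crawl, "running", allowed_from=["starting"])  # rejected, state unchanged
set_state(crawl, "starting", allowed_from=["waiting_org_limit", "waiting_capacity"])
set_state(crawl, "canceled", allowed_from=ANY)  # terminal, always allowed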

frontend: 
- show different waiting states on the frontend: 'Waiting (Crawl Limit)' and 'Waiting (At Capacity)'
- add gear icon on orgs admin page
- add an initial popup for setting org quotas, showing all properties from the org 'quotas' object

tests:
- add concurrent crawl limit nightly tests
- fix state waiting -> waiting_capacity
- ci: add logging of operator output on test failure
2023-05-30 15:38:03 -07:00

apiVersion: btrix.cloud/v1
kind: CrawlJob
metadata:
  name: crawljob-{{ id }}
  labels:
    crawl: "{{ id }}"
    role: "job"
    oid: "{{ oid }}"
    userid: "{{ userid }}"

spec:
  selector:
    matchLabels:
      crawl: "{{ id }}"

  id: "{{ id }}"
  userid: "{{ userid }}"
  cid: "{{ cid }}"
  oid: "{{ oid }}"
  scale: {{ scale }}

  ttlSecondsAfterFinished: 30

  {% if expire_time %}
  expireTime: "{{ expire_time }}"
  {% endif %}
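For context, the backend fills in this template before submitting the CrawlJob to Kubernetes. A minimal rendering sketch, assuming a Jinja2 environment pointed at the templates directory; the directory path, example values, and surrounding code are illustrative, not btrixcloud's actual loader.

import yaml
from jinja2 import Environment, FileSystemLoader

env = Environment(loader=FileSystemLoader("btrixcloud/templates"))

params = {
    "id": "a1b2c3",
    "oid": "org-123",
    "userid": "user-456",
    "cid": "config-789",
    "scale": 1,
    "expire_time": "",  # falsy, so the expireTime block is omitted
}

rendered = env.get_template("crawl_job.yaml").render(params)
crawljob = yaml.safe_load(rendered)
assert crawljob["spec"]["scale"] == 1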