browsertrix/backend/btrixcloud/main_scheduled_job.py
Ilya Kreymer 60ba9e366f
Refactor to use new operator on backend (#789)
* Btrixjobs Operator - Phase 1 (#679)

- add metacontroller and custom crds
- add main_op entrypoint for operator

* Btrix Operator Crawl Management (#767)

* operator backend:
- run operator api in separate container but in same pod, with WEB_CONCURRENCY=1
- operator creates statefulsets and services for CrawlJob and ProfileJob
- operator: use service hook endpoint, set port in values.yaml
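
A minimal sketch of how such a service hook can look, assuming metacontroller's sync protocol (it POSTs the parent CrawlJob plus current children to the hook and applies whatever children/status come back). The route path, label/field names, and image below are illustrative assumptions, not the actual operator code:

```python
# Illustrative metacontroller sync hook (not the actual operator code).
# Metacontroller POSTs {"parent": ..., "children": ...} to the service hook
# endpoint and applies the children/status returned by the handler.
from fastapi import FastAPI, Request

app = FastAPI()


@app.post("/sync")
async def sync_crawljob(request: Request):
    data = await request.json()
    name = data["parent"]["metadata"]["name"]
    spec = data["parent"].get("spec", {})

    # desired child: a crawler StatefulSet scaled per the CrawlJob spec
    # (the real hook also emits services, redis, volumes, env, etc.)
    statefulset = {
        "apiVersion": "apps/v1",
        "kind": "StatefulSet",
        "metadata": {"name": f"crawl-{name}"},
        "spec": {
            "replicas": spec.get("scale", 1),
            "serviceName": f"crawl-{name}",
            "selector": {"matchLabels": {"crawl": name}},
            "template": {
                "metadata": {"labels": {"crawl": name}},
                "spec": {
                    "containers": [
                        {"name": "crawler", "image": "webrecorder/browsertrix-crawler"}
                    ]
                },
            },
        },
    }

    # status gets written back onto the CrawlJob custom resource
    return {"status": {"state": "running"}, "children": [statefulset]}
```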

* crawls working with CrawlJob
- jobs start with 'crawljob-' prefix
- update status to reflect current crawl state
- set sync time to 10 seconds by default, overridable with 'operator_resync_seconds'
- mark crawl as running, failed, complete when finished
- store finished status when crawl is complete
- support updating scale, forcing rollover, and stopping via patching the CrawlJob (see the patch sketch after this list)
- support cancel via deletion
- requires a content-length workaround when patching custom resources
- auto-delete of CrawlJob via 'ttlSecondsAfterFinished'
- also delete PVCs explicitly until auto-delete via statefulset is supported (k8s >1.27)
- ensure filesAdded always set correctly, keep counter in redis, add to status display
- optimization: attempt to reduce automerging, by reusing volumeClaimTemplates from existing children, as these may have additional props added
- add add_crawl_errors_to_db() for storing crawl errors from redis '<crawl>:e' key to mongodb when crawl is finished/failed/canceled
- add .status.size to display human-readable crawl size, if available (from webrecorder/browsertrix-crawler#291)
- support both the new page size key (>0.9.0) and the old one (key changed in webrecorder/browsertrix-crawler#284)
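
A hedged sketch of what patching the CrawlJob for scale/stop can look like, assuming kubernetes_asyncio's CustomObjectsApi and a merge patch; the CRD group/plural, the 'stopping' field, and the default-header workaround are assumptions, not a copy of the backend code:

```python
# Hedged sketch: patch the CrawlJob custom resource to change scale or stop a crawl.
import asyncio

from kubernetes_asyncio import config
from kubernetes_asyncio.client import ApiClient, CustomObjectsApi


async def patch_crawljob(crawl_id: str, namespace: str, body: dict) -> None:
    # in-cluster, the operator would use load_incluster_config() instead
    await config.load_kube_config()

    async with ApiClient() as api_client:
        # force a merge patch so only the provided fields are changed
        # (assumption: stands in for the content-length workaround noted above)
        api_client.set_default_header("Content-Type", "application/merge-patch+json")
        api = CustomObjectsApi(api_client)
        await api.patch_namespaced_custom_object(
            group="btrix.cloud",  # assumed CRD group/plural
            version="v1",
            namespace=namespace,
            plural="crawljobs",
            name=f"crawljob-{crawl_id}",
            body=body,
        )


# e.g. scale to 3 crawler pods, or request a graceful stop (field names assumed)
# asyncio.run(patch_crawljob("abc123", "crawlers", {"spec": {"scale": 3}}))
# asyncio.run(patch_crawljob("abc123", "crawlers", {"spec": {"stopping": True}}))
```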

* support for scheduled jobs!
- add main_scheduled_job entrypoint to run scheduled jobs
- add crawl_cron_job.yaml template for declaring CronJob
- CronJobs moved to default namespace

* operator manages ProfileJobs:
- jobs start with 'profilejob-'
- keep the expiry time current by updating 'expireTime' on the ProfileJob object while the profile is active
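
Conceptually, keeping the profile browser alive just means pushing that timestamp forward on each keepalive; a small sketch, where the 30-second padding and placing 'expireTime' under spec are assumptions:

```python
# Sketch of extending a ProfileJob's lifetime (field placement is an assumption).
from datetime import datetime, timedelta


def profilejob_expire_patch(extend_secs: int = 30) -> dict:
    # push 'expireTime' forward so the operator does not tear the browser down
    expire_at = datetime.utcnow() + timedelta(seconds=extend_secs)
    return {"spec": {"expireTime": expire_at.isoformat() + "Z"}}


# the operator can compare expireTime to the current time on each resync and
# delete the ProfileJob's children once it has passed (e.g. via the same
# patch/delete calls shown for CrawlJobs above)
```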

* refactor/cleanup:
- remove k8s package
- merge k8sman and basecrawlmanager into crawlmanager
- move templates, k8sapi, utils into root package
- delete all *_job.py files
- remove dt_now, ts_now from crawls; they now live in utils
- all db operations happen in crawl/crawlconfig/org files
- move shared crawl/crawlconfig/org functions that use the db to be importable directly,
including get_crawl_config, add_new_crawl, inc_crawl_stats
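
The resulting style is visible in the file below: shared db helpers are plain module-level coroutines that take the Motor collection as their first argument, so single-purpose entrypoints can import them directly. A rough illustration of the shape (the body and field name here are guesses, not the real implementation):

```python
# Rough illustration of the module-level db-helper style (not the real body).
import uuid


async def inc_crawl_count(crawlconfigs, cid: uuid.UUID) -> None:
    # bump the config's crawl counter directly in mongodb
    # ('crawlCount' is an assumed field name used for illustration)
    await crawlconfigs.update_one({"_id": cid}, {"$inc": {"crawlCount": 1}})
```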

* role binding: more secure setup; don't grant the crawler namespace any k8s permissions
- move cronjobs to be created in default namespace
- grant default namespace access to create cronjobs in default namespace
- remove role binding from crawler namespace

* additional tweaks to templates:
- templates: split crawler and redis statefulsets into separate yaml files (in case one needs to be loaded without the other)

* stats / redis optimization:
- don't update stats in mongodb on every operator sync, only when crawl is finished
- for api access, read stats directly from redis to get up-to-date stats
- move get_page_stats() to utils, add get_redis_url() to k8sapi to unify access
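
A hedged sketch of the read-from-redis path, assuming redis-py's asyncio client; the hostname pattern and the ':d'/':q' key names are illustrative stand-ins for the real get_redis_url()/get_page_stats() helpers:

```python
# Hedged sketch: read up-to-date crawl stats straight from the crawl's redis.
import os

from redis import asyncio as aioredis


def get_redis_url(crawl_id: str) -> str:
    # per-crawl redis service; the hostname pattern here is an assumption
    namespace = os.environ.get("CRAWLER_NAMESPACE", "crawlers")
    return f"redis://redis-{crawl_id}.{namespace}.svc.cluster.local/0"


async def get_live_stats(crawl_id: str) -> dict:
    redis = aioredis.from_url(get_redis_url(crawl_id), decode_responses=True)
    try:
        # hypothetical keys: pages done vs. pages still queued
        done = await redis.llen(f"{crawl_id}:d")
        queued = await redis.llen(f"{crawl_id}:q")
        return {"done": done, "found": done + queued}
    finally:
        await redis.close()
```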

* Add migration for operator changes
- Update configmap for crawl configs with scale > 1 or crawlTimeout > 0, and recreate CronJobs for configs where a schedule exists
- add option to rerun last migration, enabled via env var and by running helm with --set=rerun_last_migration=1

* subcharts: move crawljob and profilejob CRDs to a separate subchart, as this seems the best way to guarantee proper install order and updates on upgrade with helm; add built btrix-crds-0.1.0.tgz subchart
- metacontroller: use release from ghcr, add metacontroller-helm-v4.10.1.tgz subchart

* backend api fixes
- ensure changing scale of crawl also updates it in the db
- crawlconfigs: add 'currCrawlSize' and 'lastCrawlSize' to crawlconfig api
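
A minimal sketch of the first fix, assuming a Motor 'crawls' collection keyed by crawl id; the exact filter and field names are assumptions:

```python
# Sketch: after patching the CrawlJob, also persist the new scale on the crawl doc.
import uuid


async def update_crawl_scale(crawls, crawl_id: str, oid: uuid.UUID, scale: int) -> None:
    await crawls.update_one(
        {"_id": crawl_id, "oid": oid},
        {"$set": {"scale": scale}},
    )
```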

---------

Co-authored-by: D. Lee <leepro@gmail.com>
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
2023-04-24 18:30:52 -07:00

62 lines
1.7 KiB
Python

""" entrypoint for cron crawl job"""
import asyncio
import os
import uuid
from .k8sapi import K8sAPI
from .db import init_db
from .crawlconfigs import get_crawl_config, inc_crawl_count
from .crawls import add_new_crawl
# ============================================================================
class ScheduledJob(K8sAPI):
"""Schedulued Job APIs for starting CrawlJobs on schedule"""
def __init__(self):
super().__init__()
self.cid = os.environ["CID"]
_, mdb = init_db()
self.crawls = mdb["crawls"]
self.crawlconfigs = mdb["crawl_configs"]
async def run(self):
"""run crawl!"""
config_map = await self.core_api.read_namespaced_config_map(
name=f"crawl-config-{self.cid}", namespace=self.namespace
)
data = config_map.data
userid = data["USER_ID"]
scale = int(data.get("INITIAL_SCALE", 0))
crawl_timeout = int(data.get("CRAWL_TIMEOUT", 0))
crawlconfig = await get_crawl_config(self.crawlconfigs, uuid.UUID(self.cid))
# k8s create
crawl_id = await self.new_crawl_job(
self.cid, userid, scale, crawl_timeout, manual=False
)
# db create
await inc_crawl_count(self.crawlconfigs, crawlconfig.id)
await add_new_crawl(
self.crawls, crawl_id, crawlconfig, uuid.UUID(userid), manual=False
)
print("Crawl Created: " + crawl_id)
# ============================================================================
def main():
"""main entrypoint"""
job = ScheduledJob()
loop = asyncio.get_event_loop()
loop.run_until_complete(job.run())
if __name__ == "__main__":
main()