browsertrix/backend/btrixcloud/migrations/migration_0005_operator_scheduled_jobs.py
Tessa Walsh dc41468daf
Allow users to run crawls with 1 or 2 browser windows (#2627)
Fixes #2425 

## Changed

- Switch the backend to primarily using the number of browser windows
rather than a scale multiplier (including a migration that calculates
`browserWindows` from `scale` for existing workflows and crawls)
- Continue to support `scale` in addition to `browserWindows` in the input
models for creating and updating workflows and for adjusting live crawl
scale, for backwards compatibility
- Add a new `max_browser_windows` value to the Helm chart, falling back to
calculating it from `max_crawl_scale` for users who already have that
value set in local charts
- Rework the frontend so users can select either any value below
`crawler_browser_instances` or a multiple of it for browser windows. For
instance, with `crawler_browser_instances=4` and `max_browser_windows=8`,
the user is presented with the options 1, 2, 3, 4, and 8 (see the sketch
after this list)
- Set the maximum width of the screencast to the image width returned by
the `message`
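
To make the window math above concrete, here is a minimal sketch; the
helper names and the exact scale-to-windows conversion are assumptions for
illustration, not identifiers from this PR:

```python
# Sketch only: names and the exact conversion rule are assumptions.
CRAWLER_BROWSER_INSTANCES = 4  # browsers per crawler pod (Helm chart value)
MAX_BROWSER_WINDOWS = 8        # new Helm chart value


def browser_windows_from_scale(scale: int) -> int:
    """Assumed migration rule: one unit of scale equals one pod's
    worth of browser windows."""
    return scale * CRAWLER_BROWSER_INSTANCES


def window_options(
    per_pod: int = CRAWLER_BROWSER_INSTANCES,
    max_windows: int = MAX_BROWSER_WINDOWS,
) -> list[int]:
    """Every value below one full pod, then whole-pod multiples
    up to the configured maximum."""
    return list(range(1, per_pod)) + list(
        range(per_pod, max_windows + 1, per_pod)
    )


assert browser_windows_from_scale(2) == 8
assert window_options() == [1, 2, 3, 4, 8]
```

Under the same assumptions, the Helm chart fallback would amount to
`max_crawl_scale * crawler_browser_instances` when `max_browser_windows`
is not set.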

---------

Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: sua yoo <sua@suayoo.com>
Co-authored-by: Ilya Kreymer <ikreymer@users.noreply.github.com>
2025-06-03 13:37:30 -07:00

"""
Migration 0005 - Updating scheduled cron jobs after Operator changes
"""
from btrixcloud.models import CrawlConfig
from btrixcloud.crawlmanager import CrawlManager
from btrixcloud.migrations import BaseMigration
MIGRATION_VERSION = "0005"
class Migration(BaseMigration):
"""Migration class."""
# pylint: disable=unused-argument
def __init__(self, mdb, **kwargs):
super().__init__(mdb, migration_version=MIGRATION_VERSION)
async def migrate_up(self):
"""Perform migration up.
Find existing workflows with schedule and create new crawl_cron_jobs
from template for each, then delete existing scheduled jobs from
crawler namespace.
Additionally update the configmap for crawl configs with scale > 1
or crawlTimeout > 0.
"""
# pylint: disable=too-many-locals, duplicate-code
crawl_configs = self.mdb["crawl_configs"]
crawl_manager = CrawlManager()
# Update configmap for crawl configs that have non-zero timeout or scale > 1
match_query = {
"$or": [
{"crawlTimeout": {"$gt": 0}},
{"scale": {"$gt": 1}},
{"schedule": {"$nin": ["", None]}},
]
}
async for config_dict in crawl_configs.find(match_query):
config = CrawlConfig.from_dict(config_dict)
print(
f"Updating Crawl Config {config.id}: schedule: {config.schedule}, "
+ f"timeout: {config.crawlTimeout}"
)
try:
await crawl_manager.update_scheduled_job(config)
# pylint: disable=broad-except
except Exception as exc:
print(
"Skip crawl config migration due to error, likely missing config",
exc,
)
# Delete existing scheduled jobs from crawler namespace
print("Deleting cronjobs from crawler namespace")
await crawl_manager.batch_api.delete_collection_namespaced_cron_job(
namespace=crawl_manager.namespace
)
result = await crawl_manager.batch_api.list_namespaced_cron_job(
namespace=crawl_manager.namespace
)
assert len(result.items) == 0
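
For reference, a minimal sketch of driving this migration by hand, assuming
a motor database handle and a reachable Kubernetes API (the real entry point
is btrixcloud's migration runner, which also records applied versions):

```python
# Hypothetical driver, not part of this file.
import asyncio

from motor.motor_asyncio import AsyncIOMotorClient


async def main() -> None:
    # Database name "btrix" is an assumption for this sketch
    mdb = AsyncIOMotorClient("mongodb://localhost:27017")["btrix"]
    # CrawlManager inside migrate_up() talks to the Kubernetes API,
    # so this requires a valid kube config or in-cluster credentials
    await Migration(mdb).migrate_up()


if __name__ == "__main__":
    asyncio.run(main())
```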