browsertrix/backend/btrixcloud/main_bg.py
Tessa Walsh a031fab313
Backend work for public collections (#2198)
Fixes #2182 

This rather large PR adds the remaining backend support needed for
public collections work in the frontend.

New API endpoints include:

- Public collections endpoints: GET, streaming download
- Paginated list of URLs in a collection, with snapshot (page) info for
each
- Collection endpoint to set home URL
- Collection endpoint to upload thumbnail as stream
- DELETE endpoint to remove collection thumbnail
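
As a rough illustration of how a client might exercise these (a hedged
sketch; the base URL, route shapes, and parameter names below are
assumptions, not the exact paths added by this PR):

# Hypothetical client sketch for the new collection endpoints.
# Base URL, route shapes, and parameter names are illustrative assumptions.
import requests

BASE = "https://btrix.example.com/api"
OID = "ORG_ID"
CID = "COLLECTION_ID"

# Public collection metadata and streaming download (no auth required)
coll = requests.get(f"{BASE}/public/orgs/{OID}/collections/{CID}").json()
with requests.get(f"{BASE}/public/orgs/{OID}/collections/{CID}/download", stream=True) as r:
    with open("collection.wacz", "wb") as out:
        for chunk in r.iter_content(chunk_size=1024 * 1024):
            out.write(chunk)

auth = {"Authorization": "Bearer ACCESS_TOKEN"}

# Paginated list of URLs in the collection, with snapshot (page) info
urls = requests.get(
    f"{BASE}/orgs/{OID}/collections/{CID}/urls",
    params={"page": 1, "pageSize": 25},
    headers=auth,
).json()

# Set the collection's home URL
requests.post(
    f"{BASE}/orgs/{OID}/collections/{CID}/home-url",
    json={"pageId": "PAGE_ID"},
    headers=auth,
)

# Upload a thumbnail as a stream, then remove it
with open("thumbnail.jpg", "rb") as fh:
    requests.put(f"{BASE}/orgs/{OID}/collections/{CID}/thumbnail", data=fh, headers=auth)
requests.delete(f"{BASE}/orgs/{OID}/collections/{CID}/thumbnail", headers=auth)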

Changes to existing API endpoints include:

- Paginating public collection list results
- Several `pages` endpoints that previously only supported `/crawls/` in
their path, e.g. `/orgs/{oid}/crawls/all/pages/reAdd`, now support the
`/uploads/` and `/all-crawls/` namespaces as well (see the sketch after
this list). This is necessitated by adding pages for uploads to the
database (see below). For `/orgs/{oid}/{namespace}/all/pages/reAdd`, a
namespace of `crawls` or `uploads` serves as a filter to only affect
crawls of that type. Other endpoints are more liberal at this point and
perform the same action regardless of the namespace used in the route
(we'll likely want to change this in a follow-up to be more consistent).
- `/orgs/{oid}/{namespace}/all/pages/reAdd` now kicks off a background
job rather than doing all of the computation in an asyncio task in the
backend container. The background job additionally updates collection
date ranges, page/size counts, and tags for each collection in the org
after pages have been (re)added.
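
A minimal sketch of the namespace routing idea described above; the
handler, helper, and job-type names are illustrative assumptions, not
the actual Browsertrix code:

# Illustrative FastAPI route: one handler serving the /crawls/, /uploads/,
# and /all-crawls/ namespaces, mapping the path segment to an optional
# crawl-type filter and launching a background job.
from typing import Optional
from uuid import UUID

from fastapi import FastAPI, HTTPException

app = FastAPI()

NAMESPACE_TO_CRAWL_TYPE = {
    "crawls": "crawl",
    "uploads": "upload",
    "all-crawls": None,  # no filter: re-add pages for all crawl types
}


async def start_bg_job(job_type: str, oid: UUID, crawl_type: Optional[str]) -> str:
    """Placeholder: launch a Kubernetes Job running main_bg.py with
    BG_JOB_TYPE, OID, and CRAWL_TYPE set in its environment."""
    return "job-id"


@app.post("/orgs/{oid}/{namespace}/all/pages/reAdd")
async def re_add_org_pages(oid: UUID, namespace: str):
    if namespace not in NAMESPACE_TO_CRAWL_TYPE:
        raise HTTPException(status_code=404, detail="unknown namespace")
    crawl_type = NAMESPACE_TO_CRAWL_TYPE[namespace]
    # kick off a background job rather than doing the work inline
    job_id = await start_bg_job("readd-org-pages", oid=oid, crawl_type=crawl_type)
    return {"started": job_id}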

Other big changes:

- New uploads will now have their pages read into the database!
Collection page counts now also include uploads.
- A migration was added to start a background job for each org that will
add the pages for previously-uploaded WACZ files to the database and
update collections accordingly
- Adds a new `ImageFile` subclass of `BaseFile` for thumbnails that we
can use for other user-uploaded image files moving forward, with
separate output models for authenticated and public endpoints
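
A hedged sketch of the `ImageFile` shape; field and method names are
assumptions rather than the exact models in this PR:

# Hedged sketch: a BaseFile subclass for user-uploaded images, with distinct
# output models for authenticated vs. public endpoints. Field names here are
# illustrative assumptions, not the exact Browsertrix models.
from pydantic import BaseModel


class BaseFile(BaseModel):
    filename: str
    hash: str
    size: int


class ImageFileOut(BaseModel):
    name: str
    size: int
    mime: str


class PublicImageFileOut(BaseModel):
    mime: str


class ImageFile(BaseFile):
    mime: str = "image/jpeg"

    def get_image_file_out(self) -> ImageFileOut:
        # authenticated callers may see more detail (e.g. name and size)
        return ImageFileOut(name=self.filename, size=self.size, mime=self.mime)

    def get_public_image_file_out(self) -> PublicImageFileOut:
        # public responses expose only what's needed to render the image
        return PublicImageFileOut(mime=self.mime)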
2025-01-13 15:15:48 -08:00


""" entrypoint module for background jobs """
import asyncio
import os
import sys
import traceback
from uuid import UUID
from .models import BgJobType
from .ops import init_ops
job_type = os.environ.get("BG_JOB_TYPE")
oid = os.environ.get("OID")
crawl_type = os.environ.get("CRAWL_TYPE")
# ============================================================================
# pylint: disable=too-many-function-args, duplicate-code, too-many-locals, too-many-return-statements
async def main():
"""run background job with access to ops classes"""
# pylint: disable=import-outside-toplevel
if not os.environ.get("KUBERNETES_SERVICE_HOST"):
print(
"Sorry, the Browsertrix Backend must be run inside a Kubernetes environment.\
Kubernetes not detected (KUBERNETES_SERVICE_HOST is not set), Exiting"
)
return 1
(org_ops, _, _, _, _, page_ops, coll_ops, _, _, _, _, user_manager) = init_ops()
if not oid:
print("Org id missing, quitting")
return 1
org = await org_ops.get_org_by_id(UUID(oid))
if not org:
print("Org id invalid, quitting")
return 1
# Run job
if job_type == BgJobType.DELETE_ORG:
try:
await org_ops.delete_org_and_data(org, user_manager)
return 0
# pylint: disable=broad-exception-caught
except Exception:
traceback.print_exc()
return 1
if job_type == BgJobType.RECALCULATE_ORG_STATS:
try:
await org_ops.recalculate_storage(org)
return 0
# pylint: disable=broad-exception-caught
except Exception:
traceback.print_exc()
return 1
if job_type == BgJobType.READD_ORG_PAGES:
try:
await page_ops.re_add_all_crawl_pages(org, crawl_type=crawl_type)
await coll_ops.recalculate_org_collection_dates(org)
await coll_ops.recalculate_org_collection_counts_tags(org)
return 0
# pylint: disable=broad-exception-caught
except Exception:
traceback.print_exc()
return 1
print(f"Provided job type {job_type} not currently supported")
return 1
# # ============================================================================
if __name__ == "__main__":
return_code = asyncio.run(main())
sys.exit(return_code)
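
For local experimentation, a hypothetical harness might look like the
following; note that the module reads its environment variables at
import time, so they must be set before `main_bg` is imported (in
production they come from the Kubernetes Job environment):

# Hypothetical smoke-test harness, not part of the Browsertrix codebase.
import asyncio
import os
import sys

from btrixcloud.models import BgJobType

os.environ["KUBERNETES_SERVICE_HOST"] = "10.0.0.1"  # satisfy the k8s guard (test only)
os.environ["BG_JOB_TYPE"] = BgJobType.READD_ORG_PAGES.value
os.environ["OID"] = "00000000-0000-0000-0000-000000000000"  # placeholder org UUID
os.environ["CRAWL_TYPE"] = "upload"  # optional; assumed filter value

# import after the environment is populated, since main_bg reads it at import time
from btrixcloud.main_bg import main

sys.exit(asyncio.run(main()))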