browsertrix/backend/btrixcloud/migrations/migration_0037_upload_pages.py
Tessa Walsh a031fab313
Backend work for public collections (#2198)
Fixes #2182 

This rather large PR adds the rest of what should be needed for public
collections work in the frontend.

New API endpoints include:

- Public collection endpoints: GET and streaming download
- Paginated list of the URLs in a collection, with snapshot (page) info
for each
- Collection endpoint to set the home URL
- Collection endpoint to upload a thumbnail as a stream
- DELETE endpoint to remove a collection's thumbnail
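
As a rough illustration of the endpoints above, here is a hypothetical
client sketch. The concrete paths, payload shapes, and auth scheme are
assumptions for illustration only, not taken from this PR; consult the
actual route definitions.

```python
"""Hypothetical sketch of calling the new public-collection endpoints."""
import requests

BASE = "https://app.example.com/api"  # assumed deployment URL
ORG = "my-org"                        # assumed public org slug
COLL = "collection-id"                # assumed collection id

# Public GET of collection metadata (public endpoints assumed unauthenticated)
coll = requests.get(f"{BASE}/public/orgs/{ORG}/collections/{COLL}").json()

# Streaming WACZ download of the public collection
with requests.get(
    f"{BASE}/public/orgs/{ORG}/collections/{COLL}/download", stream=True
) as resp:
    with open("collection.wacz", "wb") as fh:
        for chunk in resp.iter_content(chunk_size=1 << 20):
            fh.write(chunk)

# Authenticated: set the collection home URL (payload shape assumed)
headers = {"Authorization": "Bearer <token>"}
requests.post(
    f"{BASE}/orgs/<oid>/collections/{COLL}/home-url",
    json={"pageId": "page-uuid"},
    headers=headers,
)

# Authenticated: upload a thumbnail as a raw stream, then delete it
with open("thumb.jpg", "rb") as fh:
    requests.put(
        f"{BASE}/orgs/<oid>/collections/{COLL}/thumbnail",
        data=fh,
        headers={**headers, "Content-Type": "image/jpeg"},
    )
requests.delete(f"{BASE}/orgs/<oid>/collections/{COLL}/thumbnail", headers=headers)
```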

Changes to existing API endpoints include:

- Paginating public collection list results
- Several `pages` endpoints that previously supported only `/crawls/` in
their path, e.g. `/orgs/{oid}/crawls/all/pages/reAdd`, now support the
`/uploads/` and `/all-crawls/` namespaces as well (a request sketch
follows this list). This is necessitated by adding pages for uploads to
the database (see below). For `/orgs/{oid}/{namespace}/all/pages/reAdd`,
`crawls` or `uploads` serves as a filter that only affects crawls of the
given type. Other endpoints are more liberal at this point and perform
the same action regardless of the namespace used in the route (we'll
likely want to make this more consistent in a follow-up).
- `/orgs/{oid}/namespace/all/pages/reAdd` now kicks off a background job
rather than doing all of the computation in an asyncio task in the
backend container. The background job additionally updates collection
date ranges, page/size counts, and tags for each collection in the org
after pages have been (re)added.
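
To make the route shapes concrete, here is a minimal sketch of hitting the
reAdd family across all three namespaces. The path shapes come from the
description above, but the HTTP method (assumed POST) and auth header are
assumptions:

```python
"""Sketch of the namespaced reAdd routes described above."""
import requests

BASE = "https://app.example.com/api"            # assumed deployment URL
OID = "org-uuid"                                # org id
headers = {"Authorization": "Bearer <token>"}   # assumed auth scheme

for namespace in ("crawls", "uploads", "all-crawls"):
    # "crawls" and "uploads" filter to that item type; "all-crawls" covers both
    requests.post(f"{BASE}/orgs/{OID}/{namespace}/all/pages/reAdd", headers=headers)
```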

Other big changes:

- New uploads will now have their pages read into the database!
Collection page counts now also include uploads.
- A migration was added to start a background job for each org that will
add the pages for previously-uploaded WACZ files to the database and
update collections accordingly
- Adds a new `ImageFile` subclass of `BaseFile` for thumbnails, which we
can reuse for other user-uploaded image files moving forward, with
separate output models for authenticated and public endpoints (sketched
below).
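
A minimal sketch of the `ImageFile` arrangement, assuming Pydantic models;
only the class relationships come from this description, and all field
names beyond them are guesses:

```python
"""Hypothetical sketch of ImageFile and its output models."""
from pydantic import BaseModel


class BaseFile(BaseModel):
    """Stand-in for the existing base file model (fields assumed)."""
    filename: str
    hash: str
    size: int


class ImageFile(BaseFile):
    """User-uploaded image, e.g. a collection thumbnail."""
    mime: str


class ImageFileOut(BaseModel):
    """Full output model for authenticated endpoints (fields assumed)."""
    path: str
    hash: str
    size: int
    mime: str


class PublicImageFileOut(BaseModel):
    """Reduced output model for public endpoints -- no internal details."""
    path: str
    mime: str
```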
2025-01-13 15:15:48 -08:00

"""
Migration 0037 -- upload pages
"""
from uuid import UUID
from btrixcloud.migrations import BaseMigration
MIGRATION_VERSION = "0037"
class Migration(BaseMigration):
"""Migration class."""
# pylint: disable=unused-argument
def __init__(self, mdb, **kwargs):
super().__init__(mdb, migration_version=MIGRATION_VERSION)
self.background_job_ops = kwargs.get("background_job_ops")
self.page_ops = kwargs.get("page_ops")
async def org_upload_pages_already_added(self, oid: UUID) -> bool:
"""Check if upload pages have already been added for this org"""
if self.page_ops is None:
print(
f"page_ops missing, assuming pages need to be added for org {oid}",
flush=True,
)
return False
mdb_crawls = self.mdb["crawls"]
async for upload in mdb_crawls.find({"oid": oid, "type": "upload"}):
upload_id = upload["_id"]
_, total = await self.page_ops.list_pages(upload_id)
if total > 0:
return True
return False
async def migrate_up(self):
"""Perform migration up.
Start background jobs to parse uploads and add their pages to db
"""
if self.background_job_ops is None:
print(
"Unable to start background job, missing background_job_ops", flush=True
)
return
mdb_orgs = self.mdb["organizations"]
async for org in mdb_orgs.find():
oid = org["_id"]
pages_already_added = await self.org_upload_pages_already_added(oid)
if pages_already_added:
print(
f"Skipping org {oid}, upload pages already added to db", flush=True
)
continue
try:
await self.background_job_ops.create_re_add_org_pages_job(
oid, crawl_type="upload"
)
# pylint: disable=broad-exception-caught
except Exception as err:
print(
f"Error starting background job to add upload pges to org {oid}: {err}",
flush=True,
)