browsertrix/backend
Tessa Walsh 21ae38362e
Add endpoints to read pages from older crawl WACZs into database (#1562)
Fixes #1597

New endpoints (replacing the old migration) to re-add crawl pages to the
database from WACZs.

After a few implementation attempts, we settled on using
[remotezip](https://github.com/gtsystem/python-remotezip) to parse the
zip files and stream their contents line-by-line for pages. I've also
modified the sync log streaming to use remotezip, which lets us remove
our own zip module and leave the complexity of parsing zip files to
remotezip.
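
For illustration, here's roughly how remotezip streams a member file
line-by-line. The presigned URL is a placeholder and the
`pages/pages.jsonl` member name follows the standard WACZ layout; the
snippet is a sketch of the pattern, not the exact code added here:

```python
import json

from remotezip import RemoteZip

# Hypothetical presigned URL for a WACZ in S3-compatible storage.
wacz_url = "https://storage.example.com/crawls/example.wacz?X-Amz-Signature=..."

# RemoteZip issues HTTP Range requests, so only the zip central directory
# and the requested member are fetched, never the whole archive.
with RemoteZip(wacz_url) as wacz:
    with wacz.open("pages/pages.jsonl") as pages_file:
        for line in pages_file:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)
            # The first JSONL line is a format header; real page records
            # carry a "url" field.
            if "url" in record:
                print(record["url"], record.get("title"))
```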

Database inserts for pages from WACZs are batched 100 at a time to
speed up the endpoint, and the task is kicked off with
asyncio.create_task so that the endpoint returns a response immediately
instead of blocking on the inserts.
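
Roughly, the batching pattern looks like the sketch below;
`stream_page_docs` and the collection handle are illustrative stand-ins,
not names from this PR:

```python
import asyncio
from typing import AsyncIterator

BATCH_SIZE = 100  # pages are inserted 100 at a time

async def stream_page_docs(crawl_id: str) -> AsyncIterator[dict]:
    """Hypothetical stand-in: yield page dicts parsed from the crawl's WACZs."""
    for i in range(250):
        yield {"crawl_id": crawl_id, "url": f"https://example.com/{i}"}

async def add_crawl_pages_to_db(crawl_id: str, pages_coll) -> None:
    """Insert page documents in batches to cut down on database round-trips."""
    batch = []
    async for page_doc in stream_page_docs(crawl_id):
        batch.append(page_doc)
        if len(batch) >= BATCH_SIZE:
            await pages_coll.insert_many(batch)  # one round-trip per 100 docs
            batch = []
    if batch:  # flush the final partial batch
        await pages_coll.insert_many(batch)

# In the endpoint handler, schedule the task without awaiting it so the
# HTTP response returns immediately:
# asyncio.create_task(add_crawl_pages_to_db(crawl_id, pages_coll))
```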

StorageOps now contains a method for streaming the bytes of any file in
a remote WACZ, requiring only the presigned URL for the WACZ and the
name of the file to stream.
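
Something along these lines (the method name and signature are
illustrative; only the general shape matches the description above):

```python
from typing import Iterator

from remotezip import RemoteZip

CHUNK_SIZE = 1024 * 256  # arbitrary chunk size for streaming

def stream_wacz_file(presigned_url: str, filename: str) -> Iterator[bytes]:
    """Yield the bytes of a single member of a remote WACZ in chunks,
    using ranged requests rather than downloading the whole archive."""
    with RemoteZip(presigned_url) as wacz:
        with wacz.open(filename) as member:
            while chunk := member.read(CHUNK_SIZE):
                yield chunk

# e.g. stream a crawl log out of the WACZ's logs/ directory:
# for chunk in stream_wacz_file(url, "logs/crawl.log"):
#     handle(chunk)
```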
2024-03-19 14:14:21 -07:00
btrixcloud Add endpoints to read pages from older crawl WACZs into database (#1562) 2024-03-19 14:14:21 -07:00
test Add endpoints to read pages from older crawl WACZs into database (#1562) 2024-03-19 14:14:21 -07:00
test_nightly Add extra and gifted execution minutes (#1361) 2023-12-07 14:34:37 -05:00
.pylintrc
Dockerfile Backend mem usage fix - use fixed MOTOR_MAX_WORKERS + switch to gunicorn (#1468) 2024-01-16 15:32:42 -08:00
mypy.ini Support multiple crawler versions (#1420) 2024-01-16 15:32:12 -08:00
requirements.txt Add endpoints to read pages from older crawl WACZs into database (#1562) 2024-03-19 14:14:21 -07:00
test-requirements.txt Add slugs to org backend (#1250) 2023-10-10 18:30:09 -07:00