browsertrix/backend/requirements.txt
Tessa Walsh 21ae38362e
Add endpoints to read pages from older crawl WACZs into database (#1562)
Fixes #1597

New endpoints (replacing the old migration) to re-add crawl pages to
the database from WACZs.

After a few implementation attempts, we settled on using
[remotezip](https://github.com/gtsystem/python-remotezip) to handle
parsing of the zip files and to stream the pages JSONL line by line.
I've also modified the sync log streaming to use remotezip, which
lets us remove our own zip module and have remotezip handle the
complexity of parsing zip files.
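
As a rough sketch of the approach (assuming the standard
`pages/pages.jsonl` path from the WACZ spec; the real code may also
handle `extraPages.jsonl` and other layouts):

```python
import json

from remotezip import RemoteZip


def stream_pages(presigned_url: str):
    """Yield page records parsed from pages.jsonl inside a remote WACZ."""
    # RemoteZip subclasses zipfile.ZipFile, fetching only the byte
    # ranges it needs via HTTP Range requests.
    with RemoteZip(presigned_url) as rz:
        with rz.open("pages/pages.jsonl") as fh:
            for line in fh:
                line = line.strip()
                if line:
                    yield json.loads(line)
```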

Database inserts for pages from WACZs are batched 100 at a time to
speed up the endpoint, and the import task is kicked off with
asyncio.create_task so the endpoint can return a response without
blocking on the import.
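
A minimal sketch of the batching and background kickoff, assuming a
Motor collection and the `stream_pages` helper above (the names here
are illustrative, not the actual Browsertrix code):

```python
import asyncio

BATCH_SIZE = 100  # matches the batching described above


async def import_pages(pages_coll, presigned_url: str):
    """Insert page records into the database in batches of 100."""
    batch = []
    for page in stream_pages(presigned_url):
        batch.append(page)
        if len(batch) >= BATCH_SIZE:
            # Motor's insert_many is awaitable and inserts the whole batch
            await pages_coll.insert_many(batch)
            batch = []
    if batch:
        await pages_coll.insert_many(batch)


# Kick off in the background so the endpoint can respond immediately:
# asyncio.create_task(import_pages(pages_coll, presigned_url))
```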

StorageOps now contains a method for streaming the bytes of any file in
a remote WACZ, requiring only the presigned URL for the WACZ and the
name of the file to stream.
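
Something along these lines, though the actual StorageOps method name
and signature are assumptions:

```python
from typing import Iterator

from remotezip import RemoteZip


def stream_wacz_file(presigned_url: str, filename: str,
                     chunk_size: int = 65536) -> Iterator[bytes]:
    """Stream the raw bytes of a single file inside a remote WACZ."""
    with RemoteZip(presigned_url) as rz:
        with rz.open(filename) as fh:
            while chunk := fh.read(chunk_size):
                yield chunk
```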
2024-03-19 14:14:21 -07:00

gunicorn
uvicorn[standard]
fastapi==0.103.2
motor==3.3.1
passlib
PyJWT==2.8.0
pydantic==1.10.13
email-validator
#fastapi-users[mongodb]==9.2.2
loguru
aiofiles
kubernetes-asyncio==29.0.0
kubernetes
aiobotocore
redis>=5.0.0
pyyaml
jinja2
humanize
python-multipart
pathvalidate
#https://github.com/ikreymer/stream-zip/archive/refs/heads/stream-uncompress.zip
https://github.com/ikreymer/stream-zip/archive/refs/heads/stream-ignore-local-crc32.zip
boto3
backoff>=2.2.1
python-slugify>=8.0.1
mypy_boto3_s3
types_aiobotocore_s3
types-redis
types-python-slugify
types-pyYAML
remotezip