browsertrix/backend/btrixcloud/pagination.py
Tessa Walsh 4014d98243
Move pydantic models to separate module + refactor crawl response endpoints to be consistent (#983)
* Move all pydantic models to models.py to avoid circular dependencies
* Include automated crawl details in all-crawls GET endpoints
- ensure the /all-crawls endpoint resolves name/firstSeed data the same way as the /crawls endpoint, so crawls display consistently in the frontend. Fields added to the get and list all-crawls endpoints for automated crawls only:
- cid
- name
- description
- firstSeed
- seedCount
- profileName

* Add automated crawl fields to list all-crawls test

* Uncomment mongo readinessProbe

* cleanup CrawlOutWithResources:
- remove 'files' from output model, only resources should be returned
- add _files_to_resources() to simplify computing presigned 'resources' from raw 'files'
- update upload tests to be more consistent, 'files' never present, 'errors' always none

---------

Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
2023-07-20 13:05:33 +02:00


"""API pagination"""
from typing import Any, List, Optional
DEFAULT_PAGE_SIZE = 1_000
# ============================================================================
def paginated_format(
items: Optional[List[Any]],
total: int,
page: int = 1,
page_size: int = DEFAULT_PAGE_SIZE,
):
"""Return items in paged format."""
return {"items": items, "total": total, "page": page, "pageSize": page_size}