Use V4 ('s3v4') signature version for all presigned URLs to support
Backblaze, fixes #2472
- add 'access_addressing_style' to choose virtual or path addressing for
the access endpoint (defaults to 'virtual', as before; see the sketch at
the end of this note)
- fix minio presigning with V4 by using the 'path' addressing style for
minio
- if the path matches '/data/' for the internal minio bucket, always use
'path' addressing
- also make the minio access path '/data/' configurable
also simplify running in any namespace with default settings:
- don't hardcode 'local-minio.default'
- in the crawlers namespace, add a 'local-minio' ExternalName service that
maps to the main namespace service
- add an 'imagePullPolicy' field to each crawler channel declaration
- if unset, defaults to the setting in the existing
'crawler_image_pull_policy' field
fixes #2522
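Roughly, the presigning change amounts to the following boto3-style setup (a hedged sketch only; the endpoint, bucket, and credential values here are placeholders, not the actual Browsertrix code):

```python
import boto3
from botocore.client import Config

def make_s3_client(endpoint_url: str, access_key: str, secret_key: str,
                   addressing_style: str = "virtual"):
    # Force SigV4 signing ('s3v4') for all presigned URLs, and pick the
    # addressing style explicitly: 'path' for the internal minio '/data/'
    # bucket, 'virtual' (the default) for the public access endpoint.
    return boto3.client(
        "s3",
        endpoint_url=endpoint_url,
        aws_access_key_id=access_key,
        aws_secret_access_key=secret_key,
        config=Config(signature_version="s3v4",
                      s3={"addressing_style": addressing_style}),
    )

# e.g. presign a download URL against a local minio bucket (placeholder values)
client = make_s3_client("http://local-minio:9000", "ADMIN", "PASSW0RD",
                        addressing_style="path")
url = client.generate_presigned_url(
    "get_object",
    Params={"Bucket": "btrix-data", "Key": "example.wacz"},
    ExpiresIn=3600,
)
```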
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
- Removes chart values that are unused
- Also change `local-mongo.default` -> `local-mongo`,
`local-minio.default` -> `local-minio`, as some users have reported
issues with `.default` and it will certainly break if Browsertrix is not
deployed in the `default` namespace.
- consolidate list_pages() and list_replay_query_pages() into
list_pages()
- to keep backwards compatibility, add <crawl>/pagesSearch, which does not
include page totals, and keep <crawl>/pages with the page total (slower);
see the sketch after this list
- qa frontend: add default 'Crawl Order' sort order, to better show
pages in QA view
- bgjob: account for parallelism in bgjobs, and log if the succeeded count
does not match the parallelism
- QA sorting: default to 'crawl order' to get better results.
- Optimize pages job: also cover crawls that may not have any pages but have pages listed in done stats
- Bgjobs: give custom op jobs more memory
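As a rough illustration of the pages vs. pagesSearch split (a sketch under assumed collection and field names, not the actual Browsertrix code):

```python
from pymongo import MongoClient

pages = MongoClient()["browsertrix"]["pages"]  # assumed database/collection names

def list_pages(crawl_id: str, skip: int = 0, limit: int = 100,
               include_total: bool = True):
    # <crawl>/pagesSearch-style listing skips the total count entirely,
    # avoiding an extra counting pass; <crawl>/pages keeps the (slower) total.
    query = {"crawl_id": crawl_id}
    items = list(pages.find(query).sort("ts", 1).skip(skip).limit(limit))
    total = pages.count_documents(query) if include_total else None
    return items, total
```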
Fixes #2406
Converts migration 0042 to launch a background job (parallelized across
several pods) to migrate all crawls by optimizing their pages and
setting `version: 2` on the crawl when complete.
Also optimizes MongoDB queries for better performance.
Migration Improvements:
- Add `isMigrating` and `version` fields to `BaseCrawl`
- Add new background job type to use in migration with accompanying
`migration_job.yaml` template that allows for parallelization
- Add new API endpoint to launch this crawl migration job, and ensure
that we have list and retry endpoints for superusers that work with
background jobs that aren't tied to a specific org
- Rework background job models and methods now that not all background
jobs are tied to a single org
- Ensure new crawls and uploads have `version` set to `2`
- Modify crawl and collection replay.json endpoints to only include
fields for replay optimization (`initialPages`, `pageQueryUrl`,
`preloadResources`) if all relevant crawls/uploads have `version` set to
`2` (see the sketch after this list)
- Remove `distinct` calls from migration pathways
- Consolidate collection stats recomputation
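For illustration, the `version` gating on replay.json could look roughly like this (function and field names are assumed, not the actual implementation):

```python
REPLAY_OPT_FIELDS = ("initialPages", "pageQueryUrl", "preloadResources")

def add_replay_optimizations(replay_json: dict, crawls: list[dict],
                             optimized: dict) -> dict:
    # Only expose the replay-optimization fields when every relevant
    # crawl/upload has already been migrated to `version: 2`.
    if crawls and all(crawl.get("version") == 2 for crawl in crawls):
        for field in REPLAY_OPT_FIELDS:
            replay_json[field] = optimized.get(field)
    return replay_json
```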
Query Optimizations:
- Remove all uses of $group and $facet
- Optimize /replay.json endpoints to precompute preload_resources, avoid
fetching crawl list twice
- Optimize /collections endpoint by not fetching resources
- Rename /urls -> /pageUrlCounts and avoid $group; instead, sort with an
index, either by seed + ts or by url, to get top matches
- Use $gte instead of $regex to get prefix matches on URL (see the sketch
after this list)
- Use $text instead of $regex to get text search on title
- Remove total from /pages and /pageUrlCounts queries by not using
$facet
- frontend: only call /pageUrlCounts when dialog is opened.
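The $gte prefix-match idea, sketched with pymongo (the upper bound shown is one common way to keep the range scan tight; the actual query may simply pair $gte with an index-backed sort and limit, and the names here are assumptions):

```python
from pymongo import MongoClient

pages = MongoClient()["browsertrix"]["pages"]  # assumed names

def url_prefix_query(prefix: str) -> dict:
    # Match URLs starting with `prefix` via a range on the indexed `url`
    # field instead of $regex, so MongoDB can use the index: lower bound is
    # the prefix itself, upper bound is the prefix with its last character
    # incremented by one.
    upper = prefix[:-1] + chr(ord(prefix[-1]) + 1)
    return {"url": {"$gte": prefix, "$lt": upper}}

# e.g. top matches for a URL prefix, sorted by the indexed field
matches = pages.find(url_prefix_query("https://example.com/")).sort("url", 1).limit(10)
```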
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: Emma Segal-Grossman <hi@emma.cafe>
Co-authored-by: Ilya Kreymer <ikreymer@users.noreply.github.com>
Fixes #2259
This PR adds backend and frontend support for the new autoclick
behavior, introduced in Browsertrix Crawler 1.5.0+.
On the backend, we introduce `min_autoclick_crawler_image` to
`values.yaml`, with a default value of
`"docker.io/webrecorder/browsertrix-crawler:1.5.0"`. If this is set and
the crawler version for a new crawl is less than this value, the
autoclick behavior is removed from the behaviors list in the configmap
created for the crawl.
The one caveat is that a crawler image tag like "latest" will always be
parsed as greater than `min_autoclick_crawler_image`, so the crawler could
run into issues if a non-numeric image tag points at an older crawler
version. For production we use hardcoded, specific versions of the crawler
except for the dev channel, which from here on out will include autoclick
support, so I think this should be okay (and the same is true of the
existing check for `min_qa_crawler_image`).
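The tag comparison could be sketched like this (illustrative only, using `packaging`; the real check may differ):

```python
from packaging.version import InvalidVersion, Version

def autoclick_supported(crawler_image: str, min_autoclick_crawler_image: str) -> bool:
    # Compare the image tags; a non-numeric tag such as "latest" cannot be
    # parsed and is assumed to be newer, which is exactly the caveat above.
    tag = crawler_image.rsplit(":", 1)[-1]
    min_tag = min_autoclick_crawler_image.rsplit(":", 1)[-1]
    try:
        return Version(tag) >= Version(min_tag)
    except InvalidVersion:
        return True
```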
On the frontend, I've added a checkbox (unchecked by default) in the
"Limits" section just below the current checkbox for autoscroll. We
might want to move these to a different section eventually - I'm not
sure Limits is the right place for them - but I wanted to be consistent
with things as they are.
---------
Co-authored-by: Ilya Kreymer <ikreymer@users.noreply.github.com>
- By default, use only `ingressClassName` for ingress class name and
corresponding field in cert-manager
- Only use the old 'kubernetes.io/ingress.class' annotation if
`ingress.useOldClassAnnotation` is set
- Allow using the old annotation only for backwards compatibility, e.g.
for GCP
- Closes #2267 and #1570
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
- Renames `inject_analytics` to `inject_extra` and updates docs
- Manually tracks page views to enable passing custom props
- Tracks copying collection share link and downloading a public
collection
---------
Co-authored-by: emma <hi@emma.cafe>
Fixes #2170
The number of days by which to delay replica file deletion is configurable
in the Helm chart with `replica_deletion_delay_days` (set by default to
7 days in `values.yaml` to encourage good practice, though we could
change this).
When `replica_deletion_delay_days` is set to an integer above 0, a delete
replica job that would otherwise be started as a Kubernetes Job is instead
created as a CronJob with a yearly cron schedule whose first run is x days
from the current moment. This CronJob is then deleted by the operator after
the job completes successfully. If a failed background job is retried, it
is re-run immediately as a Job rather than being scheduled out into the
future again.
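The scheduling idea can be sketched as follows (a hedged example, not the operator's actual code):

```python
from datetime import datetime, timedelta, timezone

def delayed_deletion_schedule(delay_days: int) -> str:
    # Build a "yearly" cron schedule whose first firing is `delay_days` from
    # now; the operator then deletes the CronJob after that run succeeds.
    run_at = datetime.now(timezone.utc) + timedelta(days=delay_days)
    return f"{run_at.minute} {run_at.hour} {run_at.day} {run_at.month} *"
```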
---------
Co-authored-by: Ilya Kreymer <ikreymer@users.noreply.github.com>
- By default, all locales are enabled to make it easy for local deployments to test new locales
- Adds DE, FR, PT locales to make way for translation in Weblate
Closes #2223
- [x] Adds `localesAvailable` to `/api/settings` endpoint, and uses that
list if available, rather than the full list of translated locales, to
determine which options to display to users
- [x] ~~Uses the user's browser locales, filtered to the current
language setting, for formatting numbers, dates, and durations~~
- [x] Adds & persists checkbox for "use same language for formatting
dates and numbers" in user settings
- [x] Replaces uses of `sl-format-bytes` with `localize.bytes(...)`, and
`sl-format-date` with a replacement `btrix-format-date` that properly
handles fallback locales
- [x] Caches all number/duration/datetime formatters by a combined key
consisting of app language, browser language, browser setting, and
formatter options so that formatters can be reused when needed
(previously any formatter with non-default options was recreated on
every render)
- [x] Splits out ordinal formatting from number formatter, as it didn't
make much sense in some non-English locales
- [x] Adds a little demo of date/time/duration/number formatting so you
can see what effect your language settings have
https://github.com/user-attachments/assets/724858cb-b140-4d72-a38d-83f602c71bc7
---------
Signed-off-by: emma <hi@emma.cafe>
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: Ilya Kreymer <ikreymer@users.noreply.github.com>
Adds sending a cancellation email when a subscription is cancelled.
- The email may also include an optional survey URL, if configured via
the helm chart `survey_url` setting.
- Cancellation e-mail configured in `sub_cancel` e-mail template
- E-mails are sent to all org admins.
- Also adds a `trialing_canceled` subscription state to differentiate from
the default `trialing` state, which automatically rolls over into `active`.
- The email is sent when: a new cancellation date is added for an
`active` subscription, or a `trialing` subscription is changed to
`trialing_canceled` (see the sketch below). A subscription can be
canceled/uncanceled several times before the actual cancellation date,
and the e-mail is sent every time it is canceled.
- The 'You have X days left of your trial' message is also always
displayed when the state is `trialing_canceled`.
Fixes #2229
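Condensed, the send condition reads roughly like this (field names such as `status` and `cancel_date` are assumptions for illustration, not the actual Browsertrix models):

```python
def should_send_cancel_email(old_sub: dict, new_sub: dict) -> bool:
    # Send the `sub_cancel` e-mail when a new cancellation date appears on an
    # `active` subscription, or when `trialing` becomes `trialing_canceled`.
    # Canceling again after an un-cancel triggers the e-mail again.
    cancel_date_added = (
        new_sub.get("cancel_date") is not None
        and new_sub.get("cancel_date") != old_sub.get("cancel_date")
    )
    if new_sub.get("status") == "active" and cancel_date_added:
        return True
    return (
        old_sub.get("status") == "trialing"
        and new_sub.get("status") == "trialing_canceled"
    )
```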
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
Closes #2222
Adds a runtime script, run at initialization of the frontend container,
that either injects the Plausible script tags or does nothing.
Fixes #2106
Docs are now hosted as part of the frontend at `/docs` by default.
- If `docs_url` is set in the helm chart, the `/docs` endpoint will
redirect to that URL instead
- Use a multi-stage Python image to build the mkdocs site as part of the
frontend image, then copy the static output
- Dir layout: move mkdocs.yml and docs into frontend/docs
- CI: Update the docs build GH action to use the new path
- Update all frontend paths to use `/docs/` instead of
`https://docs.browsertrix.com/`
---------
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
Resolves #1354
Supports crawling through pre-configured proxy servers, allowing users to select which proxy servers to use (requires Browsertrix Crawler 1.3+).
Config:
- proxies defined in the btrix-proxies subchart
- can be configured via the btrix-proxies key or a separate proxies.yaml file for the subchart
- proxies list is refreshed automatically if crawler_proxies.json changes, when the subchart is deployed
- support for SSH and SOCKS5 proxies
- proxy keys added to secrets in the subchart
- support for a default proxy that is always used if no other proxy is configured; prevent starting the cluster if the default proxy is not available
- prevent starting a manual crawl if the previously configured proxy is no longer available, and return an error (see the sketch at the end of this note)
- force 'btrix' username and group name on the browsertrix-crawler non-root user to support SSH
Operator:
- support crawling through proxies, pass proxyId in CrawlJob
- support running profile browsers with a designated proxy, pass proxyId to ProfileJob
- prevent starting a scheduled crawl if the previously configured proxy is no longer available
API / Access:
- /api/orgs/all/crawlconfigs/crawler-proxies - get all proxies (superadmin only)
- /api/orgs/{oid}/crawlconfigs/crawler-proxies - get proxies available to a particular org
- /api/orgs/{oid}/proxies - update allowed proxies for a particular org (superadmin only)
- superadmin can configure which orgs can use which proxies, stored on the org
- superadmin can also allow an org to access all 'shared' proxies, to avoid having to allow a shared proxy on each org.
UI:
- Superadmin has an 'Edit Proxies' dialog to configure, for each org, whether it has dedicated proxies and/or access to shared proxies.
- User can select a proxy in Crawl Workflow browser settings
- Users can choose to launch a browser profile with a particular proxy
- Display which proxy was used to create a profile in the profile selector
- Users can choose which default proxy to use for new workflows in Crawling Defaults
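A rough sketch of the proxy availability check mentioned above (function and field names are assumptions, not the actual Browsertrix code):

```python
from fastapi import HTTPException

def validate_proxy_id(proxy_id: str | None, allowed_proxies: list[dict]) -> None:
    # Refuse to start a manual or scheduled crawl whose workflow references a
    # proxy that is no longer configured or no longer allowed for the org.
    if not proxy_id:
        return  # no proxy requested, nothing to validate
    if proxy_id not in {proxy["id"] for proxy in allowed_proxies}:
        raise HTTPException(status_code=404, detail="proxy_not_found")
```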
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>