Commit Graph

267 Commits

Author SHA1 Message Date
Ilya Kreymer
b63caf74ad
cleanup unused chart values + change mongo default (#2484)
- Removes chart values that are unused
- Also change `local-mongo.default` -> `local-mongo` and
`local-minio.default` -> `local-minio`, as some users have reported
issues with `.default`, and it will certainly break if Browsertrix is
not deployed in the `default` namespace.
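As a quick illustration (a hypothetical sketch, not code from this change) of why the `.default` suffix is fragile: Kubernetes service DNS names resolve as `<service>.<namespace>.svc.cluster.local`, so the short name works in any namespace while the pinned form does not.

```python
# Hypothetical sketch: how the service hostname resolves inside the cluster.
short = "local-mongo"            # resolves in whatever namespace Browsertrix is deployed to
pinned = "local-mongo.default"   # only resolves if that namespace is literally "default"
# fully qualified form, for reference (namespace value is a placeholder):
fqdn = "local-mongo.{ns}.svc.cluster.local".format(ns="browsertrix")
```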
2025-03-20 08:30:45 -07:00
Ilya Kreymer
eb300815a7
Fixes #2488 (#2493)
- Fixes #2488 
- Adds a k8s api call to set `suspend=false` on Job when associated
CrawlJob is finished.
- bump version - released as 1.14.5
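A minimal sketch of the kind of call described above, using the official `kubernetes` Python client (the actual operator code and its API wrapper may differ):

```python
from kubernetes import client, config

config.load_incluster_config()  # running inside the cluster
batch = client.BatchV1Api()

def unsuspend_job(name: str, namespace: str) -> None:
    # patch only spec.suspend; a suspended Job does not schedule pods
    # until suspend is flipped back to False
    batch.patch_namespaced_job(
        name=name,
        namespace=namespace,
        body={"spec": {"suspend": False}},
    )
```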
2025-03-19 10:06:25 -07:00
Ilya Kreymer
d8365c734f version: bump to 1.14.4 2025-03-08 15:58:18 -08:00
Ilya Kreymer
03fa00df45
set default crawler channel if not set, possible fix for #2458 (#2469)
update default RWP version
2025-03-07 12:32:19 -08:00
Tessa Walsh
13bf818914
Fix nightly tests (#2460)
Fixes #2459 

- Set `/data/` as primary storage `access_endpoint_url` in nightly test
chart
- Modify nightly test GH Actions workflow to spawn a separate job per
nightly test module using dynamic matrix
- Set configuration not to fail other jobs if one job fails
- Modify failing tests:
- Add fixture to background job nightly test module so it can run alone
- Add retry loop to crawlconfig stats nightly test so it's less
dependent on timing (sketched below)

GitHub limits each workflow to 256 jobs, so this should continue to be
able to scale up for us without issue.
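The retry loop mentioned above follows a common polling pattern; a hedged sketch (function and field names are illustrative, not the actual test code):

```python
import time

def wait_for_stats(get_stats, expected_crawl_count: int,
                   attempts: int = 24, delay: float = 5.0) -> dict:
    # poll until crawlconfig stats settle instead of asserting on the first read
    for _ in range(attempts):
        stats = get_stats()  # e.g. a GET against the crawlconfig endpoint in the real test
        if stats.get("crawlCount") == expected_crawl_count:
            return stats
        time.sleep(delay)
    raise AssertionError("crawlconfig stats never reached expected values")
```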

---------

Co-authored-by: Ilya Kreymer <ikreymer@users.noreply.github.com>
2025-03-06 16:23:30 -08:00
Ilya Kreymer
9466e83d18 version: bump to 1.14.3 2025-03-03 15:20:40 -08:00
Ilya Kreymer
e13c3bfb48
move db migrations to initContainers: (#2449)
- should avoid gunicorn worker timeouts for long running migrations,
also fixes #2439
- add main_migrations as entrypoint to just run db migrations, using
existing init_ops() call
- first run 'migrations' container with same resources as 'app' and 'op'
- additional typing for initializing db
- cleanup unused code related to running only once, waiting for db to be ready
- fixes #2447
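A rough sketch of what such a dedicated entrypoint might look like (the import path and method name are assumptions; the commit only names `main_migrations` and the existing `init_ops()` call):

```python
import asyncio

from btrixcloud.ops import init_ops  # assumed import path

async def main_migrations() -> None:
    # initialize just enough of the backend to run migrations, then exit,
    # so the app/op containers never block on long-running migrations
    ops = await init_ops()
    await ops.run_db_migrations()  # hypothetical method name

if __name__ == "__main__":
    asyncio.run(main_migrations())
```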
2025-03-03 13:13:15 -08:00
Ilya Kreymer
cb52da66dc version: bump to 1.14.2 2025-02-27 14:13:03 -08:00
Ilya Kreymer
376c9981dc version: bump to 1.14.1 2025-02-26 23:15:01 -08:00
Ilya Kreymer
67668438c0
ingress: only set ssl-redirect if using tls (#2432)
otherwise, http path should be accessible. Can be used when TLS
termination handled outside of ingress.
2025-02-26 23:12:07 -08:00
Ilya Kreymer
e67708bd4f version: update to 1.14.0 2025-02-24 14:49:46 -08:00
Ilya Kreymer
8a507f0473
Consolidate list page endpoints + better QA sorting + optimize pages fix (#2417)
- consolidate list_pages() and list_replay_query_pages() into
list_pages()
- to keep backwards compatibility, add <crawl>/pagesSearch that does not
include page totals, keep <crawl>/pages with page total (slower)
- qa frontend: add default 'Crawl Order' sort order, to better show
pages in QA view
- bgjob: account for parallelism in bgjobs, add logging if the
succeeded count doesn't match parallelism
- QA sorting: default to 'Crawl Order' to get better results.
- Optimize pages job: also cover crawls that may not have any pages but have pages listed in done stats
- Bgjobs: give custom op jobs more memory
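A hedged pymongo-style sketch of the pagesSearch/pages split described above (collection and field names are assumptions):

```python
def list_pages(coll, crawl_id: str, skip: int, limit: int, with_total: bool):
    query = {"crawl_id": crawl_id}
    items = list(coll.find(query).skip(skip).limit(limit))
    if not with_total:
        # <crawl>/pagesSearch: skip the expensive total count entirely
        return items, None
    # <crawl>/pages: keep the page total for existing clients (slower)
    return items, coll.count_documents(query)
```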
2025-02-21 13:47:20 -08:00
Ilya Kreymer
3ca68bf1d2 version: 1.14.0-beta.6 2025-02-20 15:37:33 -08:00
Tessa Walsh
f8fb2d2c8d
Rework crawl page migration + MongoDB Query Optimizations (#2412)
Fixes #2406 

Converts migration 0042 to launch a background job (parallelized across
several pods) to migrate all crawls by optimizing their pages and
setting `version: 2` on the crawl when complete.

Also optimizes MongoDB queries for better performance.

Migration Improvements:

- Add `isMigrating` and `version` fields to `BaseCrawl`
- Add new background job type to use in migration with accompanying
`migration_job.yaml` template that allows for parallelization
- Add new API endpoint to launch this crawl migration job, and ensure
that we have list and retry endpoints for superusers that work with
background jobs that aren't tied to a specific org
- Rework background job models and methods now that not all background
jobs are tied to a single org
- Ensure new crawls and uploads have `version` set to `2`
- Modify crawl and collection replay.json endpoints to only include
fields for replay optimization (`initialPages`, `pageQueryUrl`,
`preloadResources`) if all relevant crawls/uploads have `version` set to
`2`
- Remove `distinct` calls from migration pathways
- Consolidate collection recompute stats

Query Optimizations:
- Remove all uses of $group and $facet
- Optimize /replay.json endpoints to precompute preload_resources, avoid
fetching crawl list twice
- Optimize /collections endpoint by not fetching resources 
- Rename /urls -> /pageUrlCounts and avoid $group, instead sort with
index, either by seed + ts or by url to get top matches.
- Use $gte instead of $regex to get prefix matches on URL (see the
sketch after this list)
- Use $text instead of $regex to get text search on title
- Remove total from /pages and /pageUrlCounts queries by not using
$facet
- frontend: only call /pageUrlCounts when dialog is opened.
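For the `$gte` prefix-match swap, a minimal sketch (field names are illustrative): an ascending index on `url` lets MongoDB walk an index range starting at the prefix instead of regex-scanning the collection, with a client-side trim for anything past the prefix.

```python
def top_url_matches(coll, prefix: str, n: int = 10):
    cursor = coll.find({"url": {"$gte": prefix}}).sort("url", 1).limit(n)
    # the index walk can run past the prefix, so trim client-side
    return [doc for doc in cursor if doc["url"].startswith(prefix)]
```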


---------

Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: Emma Segal-Grossman <hi@emma.cafe>
Co-authored-by: Ilya Kreymer <ikreymer@users.noreply.github.com>
2025-02-20 15:26:11 -08:00
Ilya Kreymer
88a9f3baf7
ensure running crawl configmap is updated when exclusions are added/removed (#2409)
exclusions are already updated dynamically if crawler pod is running,
but when crawler pod is restarted, this ensures new exclusions are also
picked up:
- mount configmap in separate path, avoiding subPath, to allow dynamic
updates of mounted volume
- adds a lastConfigUpdate timestamp to CrawlJob - if lastConfigUpdate in
spec is different from current, the configmap is recreated by operator
- operator: also update image from channel to avoid any issues with
updating crawler in channel
- only updates for exclusion add/remove so far, can later be expanded to
other crawler settings (see: #2355 for broader running crawl config
updates)
- fixes #2408
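A hedged sketch of the reconcile check described above (names are illustrative, not the operator's actual code):

```python
def reconcile_configmap(crawljob_spec: dict, applied: dict, recreate_configmap) -> None:
    # recreate the crawler configmap whenever the CrawlJob spec carries a
    # newer lastConfigUpdate than the one last applied
    if crawljob_spec.get("lastConfigUpdate") != applied.get("lastConfigUpdate"):
        recreate_configmap(crawljob_spec)
        applied["lastConfigUpdate"] = crawljob_spec.get("lastConfigUpdate")
```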
2025-02-19 11:42:19 -08:00
Ilya Kreymer
a7c8ca4028 version: bump to 1.14.0-beta.1 2025-02-17 16:48:27 -08:00
Ilya Kreymer
5bebb6161a
Issue 2396 readd pages fixes (#2398)
readd pages fixes:
    - add additional mem to background job
- copy page qa data to separate temp coll when re-adding pages, then
merge back in
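A rough pymongo sketch of the copy-then-merge-back approach (collection and field names are assumptions):

```python
def snapshot_qa(db, crawl_id: str) -> None:
    # copy existing QA data for this crawl into a temp collection
    db.pages.aggregate([
        {"$match": {"crawl_id": crawl_id, "qa": {"$exists": True}}},
        {"$out": "pages_qa_tmp"},
    ])

def restore_qa(db) -> None:
    # after pages are re-added, merge the saved QA fields back in by _id
    db.pages_qa_tmp.aggregate([
        {"$merge": {"into": "pages", "on": "_id",
                    "whenMatched": "merge", "whenNotMatched": "discard"}},
    ])
```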
2025-02-17 13:52:11 -08:00
Tessa Walsh
0e9e70f3a3
Add WACZ filename, depth, favIconUrl, isSeed to pages (#2352)
Adds `filename` to pages, pointing to the WACZ file those pages come
from, as well as depth, favIconUrl, and isSeed. Also adds an idempotent
migration to backfill this information for existing pages, and increases
the backend container's startupProbe time to 24 hours to give it sufficient
time to finish the migration.
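A hedged sketch of the idempotent backfill idea, collapsed to the single-WACZ case (the real migration maps each page to the WACZ it came from; names are assumptions):

```python
def backfill_page_fields(db, crawl_id: str, wacz_filename: str) -> None:
    # only touch pages still missing the new field, so reruns are safe
    db.pages.update_many(
        {"crawl_id": crawl_id, "filename": {"$exists": False}},
        {"$set": {"filename": wacz_filename}},
    )
```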
---------

Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
2025-02-05 15:50:04 -05:00
Tessa Walsh
5684e896af
Add support for autoclick (#2313)
Fixes #2259 

This PR brings backend and frontend support for the new autoclick
behavior in Browsertrix, introduced in Browsertrix Crawler 1.5.0+.

On the backend, we introduce `min_autoclick_crawler_image` to
`values.yaml`, with a default value of
`"docker.io/webrecorder/browsertrix-crawler:1.5.0"`. If this is set and
the crawler version for a new crawl is less than this value, the
autoclick behavior is removed from the behaviors list in the configmap
created for the crawl.

The one caveat for this is that a crawler image tag like "latest" will
always be parsed as greater than `min_autoclick_crawler_image`, so there
is the potential for the crawler to run into issues if using a
non-numeric image tag with an older version of the crawler. For
production we use hardcoded specific versions of the crawler except for
the dev channel, which from here on out will include autoclick
support, so I think this should be okay (and is also true of the
existing implementation for checking `min_qa_crawler_image`).
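A hedged sketch of this version gate (not the actual backend code; note that `packaging.version` rejects tags like "latest" outright, so the sketch treats unparseable tags as new enough, mirroring the caveat above):

```python
from packaging.version import InvalidVersion, Version

MIN_AUTOCLICK = "1.5.0"

def filter_behaviors(behaviors: list[str], crawler_tag: str) -> list[str]:
    try:
        if Version(crawler_tag) < Version(MIN_AUTOCLICK):
            # crawler too old: drop autoclick from the configmap's behaviors
            return [b for b in behaviors if b != "autoclick"]
    except InvalidVersion:
        pass  # non-numeric tag like "latest": assume it's new enough
    return behaviors
```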

On the frontend, I've added a checkbox (unchecked by default) in the
"Limits" section just below the current checkbox for autoscroll. We
might want to move these to a different section eventually - I'm not
sure Limits is the right place for them - but I wanted to be consistent
with things as they are.

---------

Co-authored-by: Ilya Kreymer <ikreymer@users.noreply.github.com>
2025-01-16 12:44:00 -08:00
Dmitriy Pertsev
246bcc73c5
Use new ingressClassName only by default (#2268)
- By default, use only `ingressClassName` for ingress class name and
corresponding field in cert-manager
- Only use old 'kubernetes.io/ingress.class' if
ingress.useOldClassAnnotation is set
- Allow using the old annotation only for backwards compatibility, e.g.
for GCP
- Closes #2267 and #1570

---------

Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
2025-01-15 23:23:50 -08:00
Ilya Kreymer
12f358b826
Merge pull request #2271 from webrecorder/public-collections-feature
feat: Public collections, includes:
- feat: Public org profile page #2172
- feat: Collection thumbnails, start page, and public view updates #2209
- feat: Track collection events #2256
2025-01-13 19:32:45 -08:00
Ilya Kreymer
bab5345ad5 version: bump to 1.14.0-beta.0 for public collections! 2025-01-13 19:29:54 -08:00
sua yoo
b36ed9f730
feat: Track collection events (#2256)
- Renames `inject_analytics` to `inject_extra` and updates docs
- Manually tracks page views to enable passing custom props
- Tracks copying collection share link and downloading a public
collection

---------

Co-authored-by: emma <hi@emma.cafe>
2025-01-13 15:15:49 -08:00
Tessa Walsh
a031fab313
Backend work for public collections (#2198)
Fixes #2182 

This rather large PR adds the rest of what should be needed for public
collections work in the frontend.

New API endpoints include:

- Public collections endpoints: GET, streaming download
- Paginated list of URLs in collection with snapshot (page) info for
each
- Collection endpoint to set home URL
- Collection endpoint to upload thumbnail as stream
- DELETE endpoint to remove collection thumbnail

Changes to existing API endpoints include:

- Paginating public collection list results
- Several `pages` endpoints that previously only supported `/crawls/` in
their path, e.g. `/orgs/{oid}/crawls/all/pages/reAdd`, now support
`/uploads/` and `/all-crawls/` namespaces as well. This is necessitated
by adding pages for uploads to the database (see below). For
`/orgs/{oid}/namespace/all/pages/reAdd`, `crawls` or `uploads` will
serve as a filter to only affect crawls of that given type. Other
endpoints are more liberal at this point, and will perform the same
action regardless of the namespace used in the route (we'll likely want
to change this in a follow-up to be more consistent).
- `/orgs/{oid}/namespace/all/pages/reAdd` now kicks off a background job
rather than doing all of the computation in an asyncio task in the
backend container. The background job additionally updates collection
date ranges, page/size counts, and tags for each collection in the org
after pages have been (re)added.

Other big changes:

- New uploads will now have their pages read into the database!
Collection page counts now also include uploads
- A migration was added to start a background job for each org that will
add the pages for previously-uploaded WACZ files to the database and
update collections accordingly
- Adds a new `ImageFile` subclass of `BaseFile` for thumbnails that we
can use for other user-uploaded image files moving forward, with
separate output models for authenticated and public endpoints
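A minimal FastAPI-style sketch of the public GET and streaming thumbnail upload (routes, models, and storage calls are placeholders, not the real Browsertrix endpoints):

```python
from fastapi import FastAPI, Request

app = FastAPI()

@app.get("/api/public/orgs/{org_slug}/collections/{coll_slug}")
async def get_public_collection(org_slug: str, coll_slug: str):
    ...  # look up the collection; 404 unless it is marked public

@app.put("/api/orgs/{oid}/collections/{coll_id}/thumbnail")
async def upload_thumbnail(oid: str, coll_id: str, request: Request):
    # stream the body to storage chunk-by-chunk instead of buffering it all
    async for chunk in request.stream():
        ...  # write chunk to the storage backend (e.g. multipart upload)
    return {"added": True}
```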
2025-01-13 15:15:48 -08:00
Ilya Kreymer
a21b2ff0df version: bump to 1.13.2 2025-01-08 22:58:33 -08:00
Tessa Walsh
589819682e
Optionally delay replica deletion (#2252)
Fixes #2170

The number of days to delay replica file deletion by is configurable
in the Helm chart with `replica_deletion_delay_days` (set by default to
7 days in `values.yaml` to encourage good practice, though we could
change this).

When `replica_deletion_delay_days` is set to an integer above 0 and a
delete replica job would otherwise be started as a Kubernetes Job,
a CronJob is created instead with a cron schedule set to run yearly,
starting that many days from the current moment. This CronJob is then
deleted by the operator after the job successfully completes. If a failed
background job is retried, it is re-run immediately as a Job rather
than being scheduled out into the future again.
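A hedged sketch of how such a "run once, N days from now" schedule can be derived (the yearly cron fires on that date each year, and per the description the operator deletes the CronJob after the first successful run):

```python
from datetime import datetime, timedelta, timezone

def delayed_yearly_schedule(delay_days: int) -> str:
    run_at = datetime.now(timezone.utc) + timedelta(days=delay_days)
    # minute hour day-of-month month day-of-week
    return f"{run_at.minute} {run_at.hour} {run_at.day} {run_at.month} *"
```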

---------
Co-authored-by: Ilya Kreymer <ikreymer@users.noreply.github.com>
2024-12-19 18:50:28 -08:00
Ilya Kreymer
2060ee78b4
Support Presigning for use with custom domain (#2249)
If access_endpoint_url is provided:
- Use virtual host addressing style, so presigned URLs are of the form
`https://bucket.s3-host.example.com/path/` instead of
`https://s3-host.example.com/bucket/path/`
- Allow for replacing `https://bucket.s3-host.example.com/path/` ->
`https://my-custom-domain.example.com/path/`, where
`https://my-custom-domain.example.com/path/` is the access_endpoint_url
- Remove old `use_access_for_presign` which is no longer used
- Fixes #2248
- docs: update deployment docs storages section to mention custom storages, access_endpoint_url
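A hedged boto3 sketch of the flow described above (bucket, endpoint, and access_endpoint_url values are placeholders):

```python
import boto3
from botocore.client import Config

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3-host.example.com",
    config=Config(s3={"addressing_style": "virtual"}),
)

presigned = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "bucket", "Key": "path/file.wacz"},
    ExpiresIn=3600,
)

# under virtual-host addressing the URL starts with the bucket subdomain;
# swap that prefix for the configured access_endpoint_url
virtual_prefix = "https://bucket.s3-host.example.com/"
access_endpoint_url = "https://my-custom-domain.example.com/"
public_url = access_endpoint_url + presigned[len(virtual_prefix):]
```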

---------

Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
2024-12-19 18:41:47 -08:00
Ilya Kreymer
60d07762be version: bump to 1.13.1 2024-12-19 12:01:47 -08:00
Ilya Kreymer
cf60c43df2
version: bump to 1.13.0! (#2242) 2024-12-13 20:32:38 -08:00
Ilya Kreymer
74ae3b0f8d
Add new locales (#2240)
- By default, all locales are enabled to make it easy for local deployments to test new locales
- Adds DE, FR, PT locales to make way for translation in Weblate
2024-12-13 19:59:09 -08:00
Emma Segal-Grossman
b650762a45
Allow configuring available languages from helm chart (#2230)
Closes #2223 

- [x] Adds `localesAvailable` to `/api/settings` endpoint, and uses that
list if available, rather than the full list of translated locales, to
determine which options to display to users
- [x] ~~Uses the user's browser locales, filtered to the current
language setting, for formatting numbers, dates, and durations~~
- [x] Adds & persists checkbox for "use same language for formatting
dates and numbers" in user settings
- [x] Replaces uses of `sl-format-bytes` with `localize.bytes(...)`, and
`sl-format-date` with replacement `btrix-format-date` that properly
handles fallback locales
- [x] Caches all number/duration/datetime formatters by a combined key
consisting of app language, browser language, browser setting, and
formatter options so that all formatters can be reused if needed
(previously any formatter with non-default options would be recreated
every render)
- [x] Splits out ordinal formatting from number formatter, as it didn't
make much sense in some non-English locales
- [x] Adds a little demo of date/time/duration/number formatting so you
can see what effect your language settings have
  


https://github.com/user-attachments/assets/724858cb-b140-4d72-a38d-83f602c71bc7

---------

Signed-off-by: emma <hi@emma.cafe>
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: Ilya Kreymer <ikreymer@users.noreply.github.com>
2024-12-13 22:31:26 -05:00
Ilya Kreymer
db39333ef4
Send subscription cancelation email (#2234)
Adds sending a cancellation email when a subscription is cancelled.
- The email may also include an optional survey URL, if configured in
the helm chart `survey_url` setting.
- Cancellation e-mail configured in `sub_cancel` e-mail template
- E-mails are sent to all org admins.
- Also adds `trialing_canceled` subscription state to differentiate from
a default `trialing` which will automatically rollover into `active`.
- The email is sent when: a new cancellation date is added for an
`active` subscription, or a `trialing` subscription is changed to
`trialing_canceled`. (A subscription can be canceled/uncanceled several
times before the actual date, and an e-mail is sent every time it is canceled.)
- The 'You have X days left of your trial' message is also always
displayed when the state is `trialing_canceled`.

Fixes #2229
---------

Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
2024-12-12 11:52:38 -08:00
Emma Segal-Grossman
a65ca49ddd
Plausible analytics (#2226)
Closes #2222 

Adds a runtime script, run at initialization of the frontend container,
that either injects the Plausible script tags or does nothing.
2024-12-10 16:30:22 -08:00
Tessa Walsh
661e5d9fae
Fix issue with failed background job emails not being sent (#2187)
Fixes #2186 

Background job emails will no longer fail to send for jobs unrelated to
file replication or replica deletion.

Also uses `AnyJob` for the paginated background job response model, to fix
typing that was out of date following the addition of other types of
background jobs, and to lower overhead for adding new ones moving forward.
2024-11-27 17:00:35 -08:00
Ilya Kreymer
50dac7dc50
1.12.2 release -> main (#2181)
Merge 1.12.2 release changes into main, includes:
- Collection replay full refresh on metadata / archived items (#2176)
- Fix for self-registration default org (#2178)
- Prepend missing https in start URL (#2177)
- Updated billing to support free trial messaging (#2179)

---------

Co-authored-by: sua yoo <sua@webrecorder.org>
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
Co-authored-by: sua yoo <sua@suayoo.com>
Co-authored-by: SuaYoo <SuaYoo@users.noreply.github.com>
2024-11-26 11:17:07 -08:00
Tessa Walsh
1b1819ba5a
Move org deletion to background job with access to backend ops classes (#2098)
This PR introduces background jobs that have full access to the backend
ops classes and moves the delete org job to a background job.
2024-10-10 14:41:05 -04:00
Ilya Kreymer
84a74c43a4 version: bump to 1.13.0-beta.0 2024-10-10 11:38:13 -07:00
Ilya Kreymer
c33f749515
Frontend hosted-docs (#2107)
Fixes #2106 

Docs are now hosted as part of the frontend at `/docs` by default.

- If `docs_url` is set in the helm chart, the `/docs` endpoint will
redirect to that endpoint instead
- Use multi-stage python image to build mkdocs as part of frontend, then
copy static output
- Dir layout: mkdocs.yml and docs into frontend/docs
- CI: Update docs build GH action to use new path
- Update all frontend paths to use `/docs/` instead of
`https://docs.browsertrix.com/`

---------
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
2024-10-08 14:56:34 -07:00
Ilya Kreymer
8192e5bed6 version: bump to 1.12.0 2024-10-03 16:45:54 -07:00
Vinzenz Sinapius
bb6e703f6a
Configure browsertrix proxies (#1847)
Resolves #1354

Supports crawling through pre-configured proxy servers, allowing users to select which proxy servers to use (requires browsertrix crawler 1.3+)

Config:
- proxies defined in btrix-proxies subchart
- can be configured via the btrix-proxies key or a separate proxies.yaml
file via the separate subchart
- proxies list is refreshed automatically when crawler_proxies.json
changes, if the subchart is deployed
- support for ssh and socks5 proxies
- proxy keys added to secrets in subchart
- support for a default proxy that is always used if no other proxy is
configured; prevent starting the cluster if the default proxy is not available
- prevent starting a manual crawl if a previously configured proxy is no
longer available, returning an error (see the sketch after these lists)
- force 'btrix' username and group name on the browsertrix-crawler
non-root user to support ssh

Operator:
- support crawling through proxies, pass proxyId in CrawlJob
- support running profile browsers with a designated proxy, pass proxyId to ProfileJob
- prevent starting scheduled crawl if previously configured proxy is no longer available

API / Access:
- /api/orgs/all/crawlconfigs/crawler-proxies - get all proxies (superadmin only)
- /api/orgs/{oid}/crawlconfigs/crawler-proxies - get proxies available to particular org
- /api/orgs/{oid}/proxies - update allowed proxies for particular org (superadmin only)
- superadmin can configure which orgs can use which proxies, stored on the org
- superadmin can also allow an org to access all 'shared' proxies, to avoid having to allow a shared proxy on each org.

UI:
- Superadmin has an 'Edit Proxies' dialog to configure, for each org, whether it has dedicated proxies and access to shared proxies.
- User can select a proxy in Crawl Workflow browser settings
- Users can choose to launch a browser profile with a particular proxy
- Display which proxy is used to create profile in profile selector
- Users can choose which default proxy to use for new workflows in Crawling Defaults
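A hedged sketch of the availability guard mentioned in the config list above (names and the error shape are assumptions):

```python
from fastapi import HTTPException

def check_proxy_available(proxy_id: str | None, available_proxies: dict) -> None:
    # refuse to start a crawl whose previously configured proxy is gone
    if proxy_id and proxy_id not in available_proxies:
        raise HTTPException(status_code=404, detail="proxy_not_found")
```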

---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
2024-10-02 18:35:45 -07:00
Ilya Kreymer
c242bb96d2 version: bump to 1.12.0-beta.0 2024-09-12 14:30:15 -07:00
Ilya Kreymer
1f919de294
Make custom auto-resize crawler volume ratio adjustable (#2076)
Make the avail / used storage ratio (for crawler volumes) adjustable.
Disable auto-resize if set to 0.
Follow-up to #2023
2024-09-12 09:28:19 -07:00
sua yoo
4c36c80351
feat: Display scale as number of browser windows (#2057)
Resolves https://github.com/webrecorder/browsertrix/issues/2048

---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
2024-09-05 17:32:40 -07:00
Ilya Kreymer
b3c1195878 version: bump to 1.11.6 2024-09-05 17:31:10 -07:00
Ilya Kreymer
ea252e8da9 version: bump to 1.11.5 2024-08-27 10:00:53 -07:00
sua yoo
337454f8c9
feat: Add link to hosted sign-up page (#2045)
Resolves https://github.com/webrecorder/browsertrix/issues/2043

### Changes

- Shows link to sign up in UI if `sign_up_url` is configured.
- Expires settings in session storage (for now)
2024-08-26 17:26:25 -07:00
Ilya Kreymer
95969ec747
Attempt to auto-adjust storage if usage is running out while crawl is running (#2023)
Attempt to auto-adjust PVC storage if:
- used storage (as reported in redis by the crawler) * 2.5 >
total_storage
- will cause PVC to resize, if possible (not supported by all drivers)
- uses multiples of 1Gi, rounding up to next GB
- AVAIL_STORAGE_RATIO hard-coded to 2.5 for now, to account for 2x space
for WACZ plus change for fast-updating crawls (see the sketch below)

Some caveats:
- only works if the storageClass used for PVCs has
`allowVolumeExpansion: true`, if not, it will have no effect
- designed as a last resort option: the `crawl_storage` in values and
`--sizeLimit` and `--diskUtilization` should generally result in this
not being needed.
- can be useful in cases where a crawl is rapidly capturing a lot of
content in one page, and there's no time to interrupt / restart, since
the other limits apply only at page end.
- May want to have crawler update the disk usage more frequently, not
just at page end to make this more effective.
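A hedged sketch of the resize math (ratio and Gi rounding per the description above; the PVC patch itself is issued by the operator and not shown):

```python
import math

AVAIL_STORAGE_RATIO = 2.5
GI = 1024**3

def new_pvc_size(used_bytes: int, total_bytes: int):
    if used_bytes * AVAIL_STORAGE_RATIO <= total_bytes:
        return None  # current volume is still large enough
    # round the target up to the next whole Gi
    return math.ceil(used_bytes * AVAIL_STORAGE_RATIO / GI) * GI
```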
2024-08-26 14:19:20 -07:00
Ilya Kreymer
135c97419d version: update to 1.11.4 2024-08-26 12:31:56 -07:00
Ilya Kreymer
8ff1ad39a7 version: bump to 1.11.3 2024-08-08 15:16:18 -07:00
Ilya Kreymer
ed9038fbdb version: bump to 1.11.2 2024-08-07 12:37:26 -07:00