Fixes #2753
- Adds `saveStorage` to `RawCrawlConfig` model in backend
- Adds option to Browser Settings pane of workflow editor
- Adds option to config details component
- Adds setting to docs
- Use latest crawler image for tests
- Due to the webrecorder/browsertrix-crawler#861 change, a crawl with no
successful pages should now be treated as failed. Update the fixture to allow
either the failed or complete state for backwards compatibility for now.
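A minimal sketch of the looser assertion, assuming a hypothetical helper that receives the crawl's terminal state; this is not the actual test code:

```python
# Hypothetical assertion sketch: with crawler >= the #861 change, a crawl with
# no successful pages ends as "failed"; older crawlers still report "complete".
ACCEPTABLE_STATES = {"complete", "failed"}

def assert_crawl_finished(state: str) -> None:
    assert state in ACCEPTABLE_STATES, f"unexpected terminal state: {state}"
```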
Closes #2774
## Changes
- Allows badges to expand in height when necessary
- Fixes `variant` type not including `"blue"` variant
- Fixes missing background for `"neutral"` variant
This PR adds a new checkbox to both page and seed crawl workflow types,
which will fail the crawl if behaviors detect the browser is not logged
in for supported sites.
Changes include:
- Backend support for the new crawler flag
- A new `failed_not_logged_in` crawl state
- Checkbox in the workflow editor and config details in the frontend (currently
in the Scope section - I think it makes sense to have this option up
front, but it's worth considering)
- User Guide documentation of new option
- A new nightly test for the new workflow option and
`failed_not_logged_in` state
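A rough sketch of how the nightly test might assert the new state; the API base URL, endpoint path, and helper name are assumptions for illustration, not the actual test:

```python
import time
import requests

API_BASE = "http://localhost:30870/api"  # hypothetical test API base URL

def wait_for_crawl_state(headers: dict, org_id: str, crawl_id: str, timeout: int = 600) -> str:
    # Poll the (assumed) crawl detail endpoint until the crawl reaches a terminal state.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        resp = requests.get(f"{API_BASE}/orgs/{org_id}/crawls/{crawl_id}", headers=headers)
        state = resp.json().get("state", "")
        if state == "complete" or state.startswith("failed"):
            return state
        time.sleep(10)
    raise TimeoutError("crawl did not reach a terminal state")

# With the new option enabled and a logged-out browser, the crawl is expected to end as:
# assert wait_for_crawl_state(headers, org_id, crawl_id) == "failed_not_logged_in"
```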
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: sua yoo <sua@webrecorder.org>
Resolves https://github.com/webrecorder/browsertrix/issues/2764
## Changes
Uses the crawl's first seed as the starting URL instead of the workflow's first
seed, to fix replay after saving a workflow without running it.
## Manual testing
1. Log in as crawler
2. Create a new workflow and crawl a single page, for example,
https://example.com/
3. Edit the workflow to change starting URL to https://example.org/
4. Save without running
5. Go to latest crawl tab, verify replay loads https://example.com/
Since seed file deletion checks that the seed file is not used in any
workflow, it should be deleted after the workflow is removed (a minimal version
of that check is sketched below).
Noticed while checking #2744.
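A minimal sketch of the in-use check, assuming an async Motor collection named `crawl_configs` and the `config.seedFileId` field used by seed file workflows; not the actual deletion code:

```python
# Hypothetical helper: a seed file may only be deleted once no workflow still
# references it via config.seedFileId.
async def seed_file_in_use(crawl_configs, file_id) -> bool:
    return await crawl_configs.find_one({"config.seedFileId": file_id}) is not None
```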
Resolves #2646
Depends on #2710
## Changes
(Copied from #2689)
- Allows users to specify the URL list as a file.
- Allows uploading a text file of URLs.
- Allows specifying more than 100 URLs in the URL list, in which case they are automatically converted into an uploaded list.
---------
Co-authored-by: sua yoo <sua@suayoo.com>
Fixes #2673
Changes in this PR:
- Adds a new `file_uploads.py` module and corresponding `/files` API
prefix with methods/endpoints for uploading, fetching, and deleting seed
files (can be extended to other types of files moving forward)
- Seed files are supported via `CrawlConfig.config.seedFileId` on POST
and PATCH endpoints. The `seedFileId` is replaced by a presigned URL when
passed to the crawler by the operator
- Seed files are read when first uploaded to calculate `firstSeed` and
`seedCount`, which are stored in the database and copied into the workflow
and crawl documents when they are created (a sketch follows this list)
- Logic is added to store `firstSeed` and `seedCount` for other
workflows as well, and a migration is added to backfill the data. This maintains
consistency and fixes some of the pymongo aggregations that previously
assumed all workflows would have at least one `Seed` object in
`CrawlConfig.seeds`
- Seed file and thumbnail storage stats are added to org stats
- Seed file and thumbnail uploads first check that the org's storage
quota has not been exceeded and return a 400 if so
- A cron background job (run weekly each Sunday at midnight by default,
but configurable) is added to look for seed files at least x minutes old
(1440 minutes, or 1 day, by default, but configurable) that are not in
use in any workflow, and to delete them when found. The backend pods
ensure this k8s batch job exists on startup and create it if it does not
already exist. A database entry for each run of the job is created in
the operator on job completion so that it appears in the `/jobs` API
endpoints, but retrying this type of regularly scheduled background job
is not supported, as we don't want to accidentally create multiple
competing scheduled jobs.
- Adds a `min_seed_file_crawler_image` value to the Helm chart that is
checked, if set, before creating a crawl from a workflow. If a workflow
cannot be run, the detail of the exception is returned in
`CrawlConfigAddedResponse.errorDetail` so that we can display the reason
in the frontend
- Adds a SeedFile model based on the UserFile base model (formerly ImageFile), and ensures all APIs
returning uploaded files return an absolute presigned URL (either with the external origin or the internal service origin)
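For illustration, a minimal sketch of deriving `firstSeed` and `seedCount` from an uploaded seed file (one URL per line); the helper name is an assumption, not the actual implementation:

```python
def parse_seed_file(text: str) -> tuple[str, int]:
    # Treat each non-empty line as one seed URL; the first line is firstSeed.
    urls = [line.strip() for line in text.splitlines() if line.strip()]
    if not urls:
        raise ValueError("seed file contains no URLs")
    return urls[0], len(urls)

# firstSeed, seedCount = parse_seed_file(uploaded_bytes.decode("utf-8"))
```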
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
- Fix race condition related to browser commit time
- The profile commit request waits for the browser to actually finish and the
profile to be saved. This can cause the request to time out, resulting in a
retry after the browser has already been closed.
- With these changes, the commit is now 'idempotent' and returns
'waiting_for_browser' until the profile is actually committed.
- On the frontend, keep pinging the commit endpoint (with a timeout) while
'waiting_for_browser' is returned; the commit is complete once the endpoint
returns a profile id.
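A rough client-side sketch of that polling loop; the endpoint URL and response field names are assumptions for illustration (the actual frontend is TypeScript):

```python
import time
import requests

def commit_profile(session: requests.Session, commit_url: str, timeout: float = 300) -> str:
    # Keep retrying the idempotent commit request until a profile id is returned.
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        data = session.post(commit_url, timeout=30).json()
        if data.get("detail") == "waiting_for_browser":
            time.sleep(2)
            continue
        return data["id"]  # profile id => commit finished
    raise TimeoutError("profile commit did not finish in time")
```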
---------
Co-authored-by: sua yoo <sua@suayoo.com>
Resolves #2718
## Changes
- Enables manual QA review for successfully finished crawls.
- Individual pages and the full crawl can be reviewed without assistive QA running
- Shows replay, screenshot, and extracted text without comparison if there is no assistive QA yet.
- Follow-up to #2736: removes '^' from custom prefix URLs via a utility function to avoid accumulating '^' characters
- Show URL prefix list in settings for custom prefix scope.
- Update user guide with correct custom prefix field.
---------
Co-authored-by: sua yoo <sua@webrecorder.org>
Fixes #2737
- Moves webhook-related tests to run nightly, to speed up CI runs and
avoid the periodic failures we've been getting lately.
- Also ensures all try/except blocks that have a `time.sleep` in the 'try' also have a `time.sleep` in the 'except',
to avoid fast-looping retries
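That retry pattern, sketched with a hypothetical helper (not code from this PR): sleeping in both branches so a repeatedly failing call cannot spin in a tight loop:

```python
import time

def wait_until(check, interval: float = 5, attempts: int = 60) -> bool:
    for _ in range(attempts):
        try:
            if check():
                return True
            time.sleep(interval)
        except Exception:
            # Sleep here as well; otherwise a raised exception retries immediately.
            time.sleep(interval)
    return False
```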
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
- Automatically compute the prefix from the starting URL, if no other prefix is
set in custom prefix mode.
- Ensure each prefix is actually a prefix: add '^' to each custom prefix
URL, as the include URL path is a regex (see the sketch after this list)
- Rename 'Extra URL Prefixes' to just 'URL Prefixes' and adjust the help
text to indicate that the prefix list is what is in scope
- Fixes #2735, follow-up to #2722
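A minimal sketch of both behaviors, with hypothetical helper names; since include rules are regexes, each custom prefix is anchored with exactly one leading '^', and the default prefix is assumed to be the starting URL up through its last path segment:

```python
from urllib.parse import urlsplit

def anchor_prefix(url_prefix: str) -> str:
    # Strip any existing '^' first so re-saving a workflow never produces '^^...'.
    return "^" + url_prefix.lstrip("^")

def default_prefix_from_seed(seed_url: str) -> str:
    # Assumed fallback when no custom prefix is set: everything up to the last '/'.
    parts = urlsplit(seed_url)
    path = parts.path.rsplit("/", 1)[0] + "/"
    return f"{parts.scheme}://{parts.netloc}{path}"

# anchor_prefix("https://example.com/docs/") -> "^https://example.com/docs/"
# default_prefix_from_seed("https://example.com/docs/start.html") -> "https://example.com/docs/"
```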
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
Co-authored-by: sua yoo <sua@webrecorder.org>
## Changes
- Deletes and rewrites arrays in URL search params in the workflow list when
editing array filters (i.e. tags & profiles); sketched below
- Removes a missed `console.log`
- bump to 1.17.3
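A Python sketch of the same delete-then-rewrite idea (the frontend itself uses `URLSearchParams`; function and parameter names here are illustrative only):

```python
from urllib.parse import parse_qsl, urlencode

def set_array_param(query: str, key: str, values: list[str]) -> str:
    # Drop every existing occurrence of the key, then append the new values,
    # so stale entries never linger after a filter is edited.
    pairs = [(k, v) for k, v in parse_qsl(query) if k != key]
    pairs += [(key, v) for v in values]
    return urlencode(pairs)

# set_array_param("tag=a&tag=b&onlyMine=true", "tag", ["c"]) -> "onlyMine=true&tag=c"
```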
cc @SuaYoo
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Fixes #2721
This PR removes frontend logic that set the seed-level scopeType for
custom page prefix workflows to `prefix`, which was causing the scope to
balloon larger than what users intended for some workflows.
- Don't use a persistent volume for /tmp; instead use a temporary
emptyDir
- Use a volume to avoid permission issues with the default /tmp dir
- follow-up to #2623
Resolves https://github.com/webrecorder/browsertrix/issues/2660
## Changes
- Enables filtering workflow list by tag
- Displays tags near workflow name in detail view
- Adds `<btrix-filter-chip>` component
- Migrates "schedule state", "only running", and "only mine" filters
- Adds basic documentation to Storybook
---------
Co-authored-by: Emma Segal-Grossman <hi@emma.cafe>
Connected to #2661
- Removes crawl workflows from the profile
response.
- Frontend: removes display of workflows in profile details.
- Adds an 'inUse' flag to all profile responses to indicate that a profile is in
use by at least one workflow
- Adds 'profileid' as a possible filter for workflow search, in
preparation for filtering by profile id (#2708)
- Makes 'profile_in_use' a proper error (returning 400) on profile
delete.
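A minimal FastAPI-style sketch of that last item (illustrative only, not the actual handler):

```python
from fastapi import HTTPException

def ensure_profile_deletable(in_use: bool) -> None:
    # Deleting a profile still referenced by a workflow is rejected with a 400.
    if in_use:
        raise HTTPException(status_code=400, detail="profile_in_use")
```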
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>