Closes #1405
- Properly uses `typescript-eslint`: we were missing its preset, so some
of the default `eslint` rules (which don't work properly with
TypeScript) were being applied and causing false positives
- I also moved the `eslint` config into its own file and enabled
`typescript-eslint`'s type-awareness, so that we can enable more
type-aware rules in the future if we like (see the config sketch below)
- Adds `ts-lit-plugin` to the TypeScript config, which _hopefully_ will
allow us to catch issues during build (in CI)
- It looks like `ts-lit-plugin` is sort of abandonware at the moment,
and unfortunately _doesn't_ actually work for this purpose right now,
but the Lit team is working on a replacement here:
https://www.npmjs.com/package/@lit-labs/analyzer
- Adds `fork-ts-checker-webpack-plugin`, which runs TypeScript checking
in a separate forked process during Webpack builds, which can help speed
up builds and checking (see the webpack sketch below)
- Enables incremental type checking for better speed
- Fixes a whole bunch of `eslint`-auto-fixable issues (unused imports
and variables, some type issues, etc.)
- Fixes a bunch of `lit-analyzer` issues (mostly attribute naming, some
type issues as well)
- Fixes various other type issues:
- Improves type safety in a bunch of places, notably anywhere `apiFetch`
and `APIPaginatedList` are used
- Removes some `any`s
- Adds confirmation dialog for workflow crawls
- Changes archived item confirmation from the default browser dialog to
a Shoelace dialog
- Increases dialog title size
- Out of scope: Localizes other workflow detail confirmation buttons
- Out of scope: Rewords a missed "Archive" reference in the file uploader
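
For reference, a minimal sketch of a standalone, type-aware `typescript-eslint` setup along the lines described above (file name and option values are illustrative, not this repo's exact config):

```ts
// .eslintrc.js — minimal sketch of a type-aware typescript-eslint setup
module.exports = {
  root: true,
  parser: "@typescript-eslint/parser",
  parserOptions: {
    // Pointing at a tsconfig is what enables type-aware linting
    project: "./tsconfig.json",
    tsconfigRootDir: __dirname,
  },
  plugins: ["@typescript-eslint"],
  extends: [
    "eslint:recommended",
    // Disables core eslint rules that misfire on TypeScript and
    // replaces them with TS-aware equivalents
    "plugin:@typescript-eslint/recommended",
    // Rules that actually consult the type checker
    "plugin:@typescript-eslint/recommended-requiring-type-checking",
  ],
};
```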
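And a sketch of wiring `fork-ts-checker-webpack-plugin` into a Webpack config so type checking runs in a forked process (the loader choice and `transpileOnly` option are assumptions, not necessarily this repo's setup):

```ts
// webpack.config.js — sketch: move type checking off the main build
const ForkTsCheckerWebpackPlugin = require("fork-ts-checker-webpack-plugin");

module.exports = {
  module: {
    rules: [
      {
        test: /\.ts$/,
        loader: "ts-loader",
        // Let the loader only transpile; the plugin does the type checking
        options: { transpileOnly: true },
      },
    ],
  },
  plugins: [
    // Runs the TypeScript checker in a separate process so the
    // webpack build itself isn't blocked on type checking
    new ForkTsCheckerWebpackPlugin(),
  ],
};
```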
Closes #1294
### Changes
- `crawl-list` component
- Adds a check for whether there are any items in the actions menu; if
not, rendering the actions menu is skipped (see the sketch below)
- This allows us to give the component no actions at all, which is
currently required to remove them for viewers!
- Collection Details
- Hides "Remove from Collection" option for viewers
- Crawls List
- Removes the single "View Crawl Details" option from archived items for
viewers
- All the other actions were already set up correctly to be used by all
roles!
- Dashboard
- Hides org settings gear icon button unless the user is an admin
- Hides "Create New" dropdown for viewers
- Workflow Details
- Hides workflow edit icon button for viewers
- Hides the "Delete Crawl" option in archived items for viewers
- Hides the "Run Crawl" option for viewers
- Workflow List
- Hides all edit-related options for viewers, the only option now is
copying tags
- Removes the deactivate / delete options (which were only visible when
running a crawl) from the workflow list actions
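
The menu check in `crawl-list` amounts to something like this Lit sketch (component and property names are illustrative, not the actual component's API):

```ts
import { LitElement, html, nothing, TemplateResult } from "lit";
import { customElement, property } from "lit/decorators.js";

// Sketch: a list item that skips its actions dropdown entirely when
// the caller (e.g. a viewer-role page) passes no actions
@customElement("demo-crawl-item")
export class DemoCrawlItem extends LitElement {
  @property({ attribute: false })
  actionMenuItems: TemplateResult[] = [];

  render() {
    return html`<div>${this.renderActions()}</div>`;
  }

  private renderActions() {
    // No menu items: render nothing instead of an empty dropdown
    if (!this.actionMenuItems.length) return nothing;
    return html`
      <sl-dropdown>
        <sl-icon-button slot="trigger" name="three-dots-vertical"></sl-icon-button>
        <sl-menu>${this.actionMenuItems}</sl-menu>
      </sl-dropdown>
    `;
  }
}
```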
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: sua yoo <sua@suayoo.com>
Closes #1351
### Changes
- Removes the header icon from the collection share access column (the
header was just an icon). This is now mostly in line with how we display
status icons in the archived items list (there is a spacing difference
between the two lists in where the icon sits relative to the label: it
aligns with the text here vs. with the icon in the archived items list).
- Adds two new crawl finished states, `stopped_by_user` and
`stopped_quota_reached`
- Tracks other possible 'stop reasons' in the operator, though without
making them distinct states for now
- Updates the frontend with 'Stopped by User' and 'Stopped: Time Quota
Reached', shown with the same icon as the current `partial_complete`
(see the sketch below)
- Adds a migration of `partial_complete` to either `stopped_by_user` or
`complete` (no historical quota data available)
- Addresses an edge case in scaling: if the crawl never scaled (no Redis
entry, no pod), automatically scale down
- Addresses an edge case in status: if a crawl is somehow 'canceled' but
not deleted, immediately delete the crawl object and begin finalizing.
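
On the frontend, the new states map to labels roughly like this sketch (the `CrawlState` union is abridged, and labels other than the two quoted above are assumptions):

```ts
// Sketch: labels for the new finished states, shown with the same icon
// variant as partial_complete
type CrawlState =
  | "complete"
  | "partial_complete"
  | "stopped_by_user"
  | "stopped_quota_reached";

const stateLabels: Record<CrawlState, string> = {
  complete: "Complete",
  partial_complete: "Partial Complete",
  stopped_by_user: "Stopped by User",
  stopped_quota_reached: "Stopped: Time Quota Reached",
};
```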
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
- Instead of restarting the crawler when an exclusion is added or
removed, adds a message to a Redis list (one list per crawler instance;
see the first sketch below)
- No longer filters the existing queue on the backend; this is now handled by the crawler (implemented in 0.12.0 via webrecorder/browsertrix-crawler#408)
- Match response optimization: instead of returning the first 1,000
matches, limits the response to 500K and returns however many matches
fit in that response size (allowing optional pagination on the frontend;
see the second sketch below)
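
A sketch of the first point: queueing an exclusion change for a running crawler instance instead of restarting it (key naming and message shape are assumptions, using `ioredis`):

```ts
import Redis from "ioredis";

// Sketch: push an exclusion add/remove message onto a per-instance list;
// the crawler pops and applies messages without restarting
async function pushExclusionMessage(
  redis: Redis,
  crawlId: string,
  instanceIndex: number,
  regex: string,
  add: boolean,
): Promise<void> {
  const msg = JSON.stringify({
    type: add ? "addExclusion" : "removeExclusion",
    regex,
  });
  await redis.rpush(`${crawlId}:msg:${instanceIndex}`, msg);
}
```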
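And a sketch of the match response cap: return as many matches as fit in a ~500K budget instead of a fixed first-1000 window (the cap is from this change; the function shape is illustrative):

```ts
// Sketch: size-capped match collection for the queue-match response
function collectMatches(matches: Iterable<string>, maxBytes = 500_000): string[] {
  const encoder = new TextEncoder();
  const results: string[] = [];
  let total = 0;
  for (const match of matches) {
    total += encoder.encode(match).length;
    if (total > maxBytes) break; // stop once the response budget is spent
    results.push(match);
  }
  return results;
}
```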
Fixes #1261, closes #1092
The quota for monthly execution minutes is treated as a hard cap. Once
it is exceeded, an alert indicating that an org has exceeded its monthly
execution minutes will display and the user will be unable to start new
crawls. Any running crawls will be stopped once the quota is exceeded.
An execution minutes meter bar is also added in the Org Dashboard and
displayed if a quota is set. More detail in #1305 which was
merged into this branch.
## Changes
- Enable setting `maxExecMinutesPerMonth` in the orgs list quotas by superadmin
- Enforce quota by stopping crawls in operator once quota is reached
- Show alert banner once execution time quota is hit:
- Once quota is hit, disable Run Crawl buttons in the frontend, return a
403 with `exec_minutes_quota_reached` detail from the crawl config
`/run` endpoint in the backend, and don't run new workflows on creation
(similar to the storage quota; see the sketch below)
- Display execution time for crawls in the crawl details overview,
immediately below
- Show execution minutes meter on dashboard (from #1305)
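
A sketch of how the frontend can branch on the new 403 detail when starting a crawl (endpoint path and error-handling shape are illustrative; only the `exec_minutes_quota_reached` detail string comes from this change):

```ts
// Sketch: surface the quota error from the crawl config /run endpoint
async function runWorkflow(orgId: string, workflowId: string): Promise<unknown> {
  const resp = await fetch(
    `/api/orgs/${orgId}/crawlconfigs/${workflowId}/run`,
    { method: "POST" },
  );
  const body = await resp.json();
  if (resp.status === 403 && body.detail === "exec_minutes_quota_reached") {
    // Same pattern as the storage quota: show the alert banner and
    // keep "Run Crawl" buttons disabled
    throw new Error("Monthly execution minutes quota reached");
  }
  return body;
}
```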
---------
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Co-authored-by: sua yoo <sua@webrecorder.org>
Partially resolves #1223, fixes #1298
- Adds crawl usage table in dashboard under metrics
- Shows skeleton loading indicator when metrics are loading (@Shrinks99
feel free to adjust how this looks)
- Shows max number of concurrent crawls running if any are running ("`running` / `max` Crawls Running")
- Replaces org UUID in URL/browser location bar with org slug.
- Refactor: Adds shared app state utility using https://sijakret.github.io/lit-shared-state/ to
access org data from deep descendants.
- Backwards compatible: org UUID URLs should auto-redirect to org slug URLs (see the sketch below).
- Show the org UUID in the org settings General tab for use with APIs
(Resolves #1258, follows #1279)
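
A sketch of the backwards-compatible redirect: if the org identifier in the URL looks like a UUID, resolve it to the org's slug (helper names are assumptions):

```ts
// Sketch: redirect legacy UUID org URLs to slug URLs
const UUID_RE =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function maybeRedirectToSlug(
  orgIdOrSlug: string,
  orgs: { id: string; slug: string }[],
): string | null {
  if (!UUID_RE.test(orgIdOrSlug)) return null; // already a slug
  const org = orgs.find((o) => o.id === orgIdOrSlug);
  if (!org) return null;
  // Preserve the rest of the path while swapping out the UUID segment
  return window.location.pathname.replace(orgIdOrSlug, org.slug);
}
```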
Fast follow to https://github.com/webrecorder/browsertrix-cloud/pull/1276
Updates the label, info text, and preview text for the org slug field to be more user-friendly:
uses 'Custom URL Identifier' and 'Customize your organization's web address for accessing Browsertrix Cloud'
---------
Co-authored-by: Ilya Kreymer <ikreymer@users.noreply.github.com>
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
Closes #1273
- Viewers can see the share button and the dialog's sharing info if the collection is shareable
- Viewers can't see or change the share toggle
- Viewers can't see the share button if the collection is not shareable
- Allows editing of org slugs (actual URL updates will be handled in
https://github.com/webrecorder/browsertrix-cloud/issues/1258.)
- Converts user input to a slug using slugify (see the sketch below)
- Adds help text to org name and slug
- Renames tab from "information" to "general" settings
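
A sketch of the slug conversion, assuming the npm `slugify` package (the PR only says input is slugified; the package and options shown are assumptions):

```ts
import slugify from "slugify";

// Sketch: normalize user input into a URL-safe org slug
function toOrgSlug(input: string): string {
  // lower-cases and strips characters that aren't URL-safe
  return slugify(input, { lower: true, strict: true });
}

// toOrgSlug("My Archive Team!") === "my-archive-team"
```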
- Adds `position: sticky` to the workflow creator / editor controls to
affix them to the bottom of the screen; they are now always visible!
(See the CSS sketch below.)
- Renames "Extra URLs in Scope" to "Extra URL Prefixes in Scope"
- Updates documentation accordingly
- Adjusts casing for checkboxes
- Adds the multiplication sign to the crawler instances settings to
better communicate that they are increases in scale and not arbitrary
numbers.
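
The sticky controls boil down to CSS along these lines, shown here as a Lit style sketch (selector and color token are illustrative):

```ts
import { css } from "lit";

// Sketch: affix the editor footer controls to the bottom of the viewport
export const stickyFooter = css`
  .form-actions {
    position: sticky;
    bottom: 0;
    /* keep the controls above scrolled form content */
    z-index: 10;
    background: var(--sl-color-neutral-0);
  }
`;
```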
- Ensures the need-login event bubbles until handled (see the sketch below)
- Redirects on 401 from the /refresh endpoint
- Goes to the previous URL upon login, rather than always to the home page
- Shows an accurate login notification (rather than a less precise "couldn't retrieve org" or similar message)
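
A sketch of a bubbling need-login event (the event name and detail shape are assumptions):

```ts
// Sketch: an event that bubbles out of shadow DOM until an ancestor
// (e.g. the app shell) handles it by showing the login screen
export class NeedLoginEvent extends CustomEvent<{ redirectUrl?: string }> {
  constructor(detail: { redirectUrl?: string } = {}) {
    super("btrix-need-login", {
      detail,
      bubbles: true,  // propagate up through ancestor elements
      composed: true, // cross shadow DOM boundaries
    });
  }
}

// Usage from a component that got a 401 from /refresh; passing the current
// path supports returning to the previous URL after login:
// this.dispatchEvent(new NeedLoginEvent({ redirectUrl: location.pathname }));
```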
- Limits URL list entry to 1,000 URLs (see the validation sketch below)
- Limits additional URL list entry to 100 URLs
- Shows the first invalid URL in the list in the error message
- Quick and dirty fix for long URLs wrapping: shows URLs in the list on one line, with the entire container scrolling
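
A sketch of the URL list validation described above (the 1,000 limit is from this change; the function shape and messages are illustrative):

```ts
// Sketch: cap list entries and report the first invalid URL
const URL_LIST_MAX = 1_000;

function validateUrlList(raw: string): { valid: boolean; error?: string } {
  const urls = raw.split(/\s+/).filter(Boolean);
  if (urls.length > URL_LIST_MAX) {
    return {
      valid: false,
      error: `Please shorten list to ${URL_LIST_MAX} URLs or fewer`,
    };
  }
  for (const url of urls) {
    try {
      new URL(url); // throws on invalid URLs
    } catch {
      return { valid: false, error: `Invalid URL: ${url}` };
    }
  }
  return { valid: true };
}
```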
---------
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
* Give protocol selection box smaller max-width
* Add warning and docs link to browser profile creation
- Updates dialog styling to btrix dialog
- Updates button sizes
- Updates button placement in dialog
- Updates button labels for consistency with other buttons in app
- Updates docs page with new button labels
* Update browser profile edit metadata dialog to match the updated dialog shown on profile creation
* Open docs page in new tab
- If set, and any of the seeds fails, the entire crawl is marked as a failure.
- Add checkbox which passes the `--failOnFailedSeed` option to the crawler for URL list workflows (see the sketch below)
- Add 'Fail Crawl On Failed URL' to the crawl workflow setup docs
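
A sketch of mapping the checkbox onto the crawler option (only the `failOnFailedSeed` name comes from this change; the surrounding shape is an assumption):

```ts
// Sketch: fold the "Fail Crawl On Failed URL" checkbox into crawler params
interface CrawlerParams {
  failOnFailedSeed?: boolean;
}

function applyFailOnFailedSeed(
  params: CrawlerParams,
  failCrawlOnFailedUrl: boolean,
): CrawlerParams {
  // When set, any failed seed marks the entire crawl as failed
  return failCrawlOnFailedUrl ? { ...params, failOnFailedSeed: true } : params;
}
```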
- Applies a user permissions check before deleting anything in all /delete endpoints
- Shuts down running crawls before deleting anything in /all-crawls/delete as well as /crawls/delete
- Splits `delete_list.crawl_ids` into crawl and upload lists at the same time as the checks in /all-crawls/delete
- Updates the frontend notification message to "Only org owners can delete other users' archived items." when a crawler user attempts to delete another user's archived items
- Adds "Logs" tab to workflow detail
- Shows error logs in expandable section in "Watch" tab
- Shows a corresponding message ("no logs yet" or "logs temporarily unavailable") when `/errors` returns 503, based on crawl state (see the sketch below)
- Text tweaks: use 'error logs' instead of 'logs'; change 'crawl start' -> 'crawl continue' in log message
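
A sketch of choosing the placeholder message on a 503 from `/errors` (the endpoint path, exact messages, and the state checked are assumptions):

```ts
// Sketch: distinguish "no logs yet" from "temporarily unavailable"
async function fetchErrorLogs(
  orgId: string,
  crawlId: string,
  crawlState: string,
) {
  const resp = await fetch(`/api/orgs/${orgId}/crawls/${crawlId}/errors`);
  if (resp.status === 503) {
    // A crawl that hasn't really started has no logs yet; otherwise the
    // logs exist but can't be read right now
    return crawlState === "starting"
      ? { message: "No logs yet" }
      : { message: "Logs temporarily unavailable" };
  }
  return resp.json();
}
```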
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
- Remove config.seeds from workflow and crawl detail endpoints
- Add new paginated GET /crawls/{crawl_id}/seeds and /crawlconfigs/{cid}/seeds endpoints to retrieve seeds for a crawl or workflow
- Include firstSeed in GET /crawlconfigs/{cid} endpoint (was missing before)
- Modify frontend to fetch seeds from the new /seeds endpoints with a loading indicator (see the sketch below)
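
A sketch of consuming the new paginated seeds endpoint (the response fields follow the usual paginated-list pattern; the exact shape is an assumption):

```ts
// Sketch: paginated seed fetching for a workflow
interface APIPaginatedList<T> {
  items: T[];
  total: number;
  page: number;
  pageSize: number;
}

async function getWorkflowSeeds(
  orgId: string,
  cid: string,
  page = 1,
): Promise<APIPaginatedList<{ url: string }>> {
  const resp = await fetch(
    `/api/orgs/${orgId}/crawlconfigs/${cid}/seeds?page=${page}`,
  );
  return resp.json();
}
```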
---------
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
- Replaces individual "New" buttons in home page with dropdown button in header (includes Crawl Workflow, Upload Collection, Browser Profile)
- Refactors required step of new workflow and new collection into dialog
* Reorders actions, adds tooltip
- All copy buttons on the collection share dialog are now on the right side
- Adds a tooltip to tell the user the button opens the link in a new tab
* Make vertical `desc-list` items fill 100% width of their parent container
- Allows for better placement of items within the container
- Adds horizontal padding to info bars
* Right align copy button in item details page
* Remove config from list endpoints
- Remove config field from workflow and crawl list endpoints
- Add seedCount to CrawlConfigOut on backend and Workflow on frontend
- Refactor CrawlConfig and CrawlConfigOut to extend CrawlConfigCore + CrawlConfigAdditional
- Refactor workflow list in frontend to use firstSeed and seedCount
- Frontend uses a `ListWorkflow` type, which is `Omit<Workflow, "config">` (see the sketch below)
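
In types, the frontend change is roughly the following (the `Workflow` shape is abridged):

```ts
// Sketch: list endpoints drop the heavy `config` field; the frontend
// models that with Omit, keeping the new seed summary fields
interface Workflow {
  id: string;
  firstSeed: string;
  seedCount: number;
  config: { seeds: unknown[] };
  // ...other fields
}

type ListWorkflow = Omit<Workflow, "config">;
```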
- Moves metadata tab to first position
- Adds save button to each section, stays in edit view on saving
- Validates name exists before moving to next section or saving
- Changes save button text to "Create Collection without Items" if crawls/uploads aren't selected in a new collection
- Fix server error not showing in UI
* update text
* remove trailing slash removal
* make scope help text responsive as user types
---------
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
* Use Metacontroller's DecoratorController to create a CrawlJob from a Job (see the sketch below)
* Scheduled job work:
- use the existing job name for the scheduled crawljob
- use a suspended job; set startTime, completionTime, and succeeded status on the job when the crawljob is done
- simplify the cronjob template: remove job_image and cron_namespace, use the same namespace as crawls, and use a
placeholder job image for cronjobs
* Move the storage quota check to the crawljob handler:
- add 'skipped_quota_reached' as a new failed status type
- check the storage quota before checking whether the crawljob can be started, and fail if exceeded (checked before any pods/PVCs are created)
* Frontend:
- show all crawls in the crawl workflow; no need to filter by status
- add 'skipped_quota_reached' status, shown as 'Skipped (Quota Reached)' and rendered the same as failed
* Migration: make the release namespace available as DEFAULT_NAMESPACE; delete old cronjobs in DEFAULT_NAMESPACE and recreate them in the crawlers namespace with the new template
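
For orientation, Metacontroller's DecoratorController calls a sync hook with the decorated object and expects attachments back; a sketch of such a hook in TypeScript (the project's actual hook is not TypeScript, and the CRD group and spec fields shown are assumptions):

```ts
// Sketch: a DecoratorController sync hook that attaches a CrawlJob to the
// decorated (suspended) Job
interface SyncRequest {
  object: {
    metadata: { name: string; namespace: string };
  };
}

function sync(req: SyncRequest) {
  const { name, namespace } = req.object.metadata;
  return {
    // Attachments are created and owned by the decorated Job
    attachments: [
      {
        apiVersion: "btrix.cloud/v1",
        kind: "CrawlJob",
        metadata: { name, namespace }, // reuse the existing job name
        spec: {
          /* crawl config id, scale, etc. */
        },
      },
    ],
  };
}
```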