- Fixes empty watch tab during some active states
- Disables exclusion and browser window editing while a crawl is
stopping
- Refactors frontend crawl states to match backend
- Replaces QA Review breadcrumbs with "Back" button that takes you back
to the archived item
- Crawl breadcrumbs always point to workflow
- Shows archived item icon next to item name to differentiate from
workflow
- Adds "Edit Workflow" button to "Crawl Settings"
---------
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
Co-authored-by: Tessa Walsh <tessa@bitarchivist.net>
Fixes https://github.com/webrecorder/browsertrix/issues/2064
### Changes
- Switches MDE library to one that supports shadow DOM
- Refactors collection components to btrix components
- Fixes collection detail not expanding and contracting correctly
- Adds breadcrumb navigation to all detail views, returning to correct
origin for workflow and collection items
- Refactors list page headers into layout utility
- Refactors crawl tab labels and renames "Files" tab to "WACZ Files"
- Adds new org settings tab for updating crawl details
- Refactors `workflow-editor` to move out utility functions
- Updates user guide on org settings
---------
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
Co-authored-by: Ilya Kreymer <ikreymer@gmail.com>
Improves time to first load an org with the following:
- Uses user info from login response to set org slug and route user on log in
- Stores user info in session storage so that it's available on reload
- Stores app settings in local storage until user logs out
- Loads critical org components synchronously
WIP for https://github.com/webrecorder/browsertrix/issues/2041
### Changes
- Adds button to open embedded support guide
- Adds link to help forum
- Refactors app bar to look nicer on smaller screens
- Updates workflow job type copy and adds additional clarifying text
- Changes "List of URLs" label to "Crawl URL(s)"
- Refactors `NewWorkflowDialog` into tailwind element
- Renames "Org Settings" -> "Settings"
- Reduces gap between settings panel heading and panel
- Always shows "Pending Invites" section and updates heading styles to match panel heading
- Updates "Current Plan" and "Usage History" sections to be on the same hierarchical level under "Billing"
- Refactors `<btrix-org>` to move `isAdmin` and `isCrawler` helpers to
app state
Also improves the slug editing experience by partially-slugifying the
value as it's entered.
Previously, submitting an org slug value of ".." or similar would cause the frontend to redirect to a "page not found" page, with all accessible links leading only to `/account/settings`. It also caused the backend to reset the org slug to one generated from the org name on reload.
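For illustration only, a minimal sketch of the kind of slugify rule involved (hypothetical helper; the actual frontend applies it only partially as the value is typed):

```python
import re


def slugify(value: str) -> str:
    """Lowercase, collapse runs of non-alphanumeric characters into a
    single hyphen, and strip leading/trailing hyphens."""
    value = value.lower()
    value = re.sub(r"[^a-z0-9]+", "-", value)
    return value.strip("-")


# A value like ".." slugifies to an empty string, which is why the backend
# previously fell back to a slug generated from the org name on reload.
assert slugify("..") == ""
assert slugify("My Org!") == "my-org"
```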
---------
Co-authored-by: sua yoo <sua@webrecorder.org>
* Fixes issue in FailedLogin model:
- fix data-model to remove nested 'attempted.attempted'
- migrate existing data to remove nested field
* Also, avoid setting dt_now() as a field default in the model, as that results in a fixed date for all objects (see the sketch after this list):
- update FailedLogin to update the 'attempted' date on every attempt
- also update PageNote object to set the date in its constructor
* Update text for too many logins to make it clear it is set only if it's a valid email
* Fixes #2001
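To illustrate the dt_now() pitfall (a sketch, not the actual Browsertrix models): a plain default is evaluated once when the class is defined and shared by every instance, while a default_factory is called for each new object.

```python
from datetime import datetime, timezone

from pydantic import BaseModel, Field


def dt_now() -> datetime:
    return datetime.now(timezone.utc)


class FailedLoginBroken(BaseModel):
    # dt_now() runs once, at class-definition time, so every
    # instance shares the same 'attempted' timestamp
    attempted: datetime = dt_now()


class FailedLoginFixed(BaseModel):
    # default_factory is called per instance, so 'attempted'
    # reflects when the object was actually created
    attempted: datetime = Field(default_factory=dt_now)
```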
Following https://github.com/webrecorder/browsertrix/pull/1995, we want
to keep the usage history table on the dashboard for now for video
demos.
### Changes
- Adds usage history table back to org dashboard
- Makes "No usage history" message more apparent
Tweaks to how execution time is tracked for more accuracy + excluding
waiting states:
- don't update if crawl state is in a 'waiting state' (waiting for
capacity or waiting for org limit)
- rename start states -> waiting states for clarity
- reset lastUpdatedTime if two consecutive updates report a non-running state, to ensure non-running states don't count while still accounting for occasional hiccups -- if only one update detects a non-running state, don't reset (sketched after this list)
- webhooks: move start webhook to when crawl actually starts for the first time (db lastUpdatedTime is not yet set + crawl is running)
- don't set lastUpdatedTime until pods actually running
- set crawljob update interval to every 10 seconds for more accurate
execution time tracking
- frontend: show seconds in 'Execution Time' display
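A rough sketch of the consecutive-update rule and waiting-state exclusion described above (state names and helpers are illustrative, not the actual operator code):

```python
RUNNING = "running"
WAITING_STATES = {"waiting_capacity", "waiting_org_limit"}


def should_reset_last_updated(prev_state: str, curr_state: str) -> bool:
    """Reset lastUpdatedTime only after two consecutive non-running updates,
    so a single stray non-running reading (a hiccup) doesn't discard
    tracked execution time."""
    return prev_state != RUNNING and curr_state != RUNNING


def should_track_exec_time(state: str, pods_running: bool) -> bool:
    """Skip tracking while waiting for capacity or the org limit, and don't
    set lastUpdatedTime until pods are actually running."""
    return pods_running and state not in WAITING_STATES
```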
- Follow-up to #1914, allows SubscriptionUpdate event to also update
quotas.
- Passes current usage info + current billing page URL to the portalUrl request so the external app can respond with the best portalUrl
- get_origin() moved to utils to be available more generally (a rough sketch follows this list).
- Updates billing tab to show current plans, switches order of quotas to
list execution time, storage first
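A rough, hypothetical sketch of a shared get_origin()-style helper (the real utility's headers and defaults may differ):

```python
def get_origin(headers: dict | None, default_origin: str) -> str:
    """Derive the request origin (scheme + host) from forwarded headers,
    falling back to a configured default. Hypothetical sketch only."""
    if not headers:
        return default_origin
    scheme = headers.get("X-Forwarded-Proto", "https")
    host = headers.get("Host")
    return f"{scheme}://{host}" if host else default_origin
```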
Handles the case where an org is made read-only while crawls are running:
- treat similar to other stopped_* states, do a graceful stop
- update UI to display "Stopped: Crawling Disabled" for this status
- don't add a corresponding skipped_* status - just skip running crawls if the org is read-only
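Roughly, the decision described above (state and field names are illustrative, not the actual operator code):

```python
def state_for_readonly_org(org_read_only: bool, crawl_state: str) -> str | None:
    """If the org was made read-only while a crawl is running, stop the
    crawl gracefully, like the other stopped_* states; the UI renders this
    as "Stopped: Crawling Disabled". Returns None when nothing changes."""
    if org_read_only and crawl_state == "running":
        return "stopped_org_readonly"
    return None
```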
Resolves https://github.com/webrecorder/browsertrix/issues/1951
### Changes
- Shows date org was created in superadmin org list
- Visually differentiates unnamed org ID
- Adds "Admin" badge to app header to make current login more apparent
- Fixes logic to show "create org" dialog if there are no orgs in an
instance
- Refactors `btrix-home` to remove unused references to non-superadmin
org list
---------
Co-authored-by: Henry Wilkinson <henry@wilkinson.graphics>
Resolves https://github.com/webrecorder/browsertrix/issues/1912
### Changes
- Show support email, if available, in invalid invite error message
- Separate error message for invite email that doesn't match current
user's
Fixes #1968
Changes:
- `stopped_quota_reached` and `skipped_quota_reached` migrated to new
values that indicate which quota was reached
- Before crawls are run, the operator checks if storage or exec mins
quotas are reached and if so fails the crawl with the appropriate state
of `skipped_storage_quota_reached` or `skipped_time_quota_reached`
- While crawls are running, the operator checks if the exec mins quota
is reached or if the size of all running crawls will mean the storage
quota is reached once uploaded; if so, the crawl is stopped gracefully
and given `stopped_storage_quota_reached` or `stopped_time_quota_reached`
state as appropriate
- Adds new nightly tests for enforcing storage quota
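A simplified sketch of the two checks described above (function and field names are illustrative; the real operator code also treats unset/zero quotas as unlimited):

```python
def state_before_run(org) -> str | None:
    """Before a crawl runs: fail it with a skipped_* state if a quota is
    already reached."""
    if org.storage_used >= org.storage_quota:
        return "skipped_storage_quota_reached"
    if org.exec_mins_used >= org.exec_mins_quota:
        return "skipped_time_quota_reached"
    return None


def state_while_running(org, running_crawls_size: int) -> str | None:
    """While crawls run: stop gracefully with a stopped_* state if exec
    minutes are exhausted, or if the combined size of all running crawls
    would push storage over quota once their files are uploaded."""
    if org.exec_mins_used >= org.exec_mins_quota:
        return "stopped_time_quota_reached"
    if org.storage_used + running_crawls_size >= org.storage_quota:
        return "stopped_storage_quota_reached"
    return None
```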