- set WEB_CONCURRENCY env var to configure the number of backend api workers for both docker and k8s (see sketch below)
- set via 'backend_workers' in values.yaml
- also add 'rwp_base_url' to values.yaml
- update containers to use the public webrecorder/browsertrix-backend and webrecorder/browsertrix-frontend images
- make liveness, readiness and startup health checks more tolerant
- set memory and cpu resource requests/limits for all used services (not minio for now)
- add readiness probe to redis, mongo
- adjust crawler limits, set via configmap
- add 'emptyDir' volume for crawl directory (so restarted pods still have access to the data)
- rename minio and redis volumes to avoid any confusion
- add pod termination grace period (defaults to 600 secs)
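
For context, uvicorn's worker count typically falls back to the WEB_CONCURRENCY env var; below is a minimal sketch of honoring it when launching the backend programmatically. The module path, host, and port are placeholders, not the repo's actual entrypoint.

```python
# Sketch: start the FastAPI backend with WEB_CONCURRENCY worker processes.
import os

import uvicorn

if __name__ == "__main__":
    # WEB_CONCURRENCY corresponds to the 'backend_workers' chart value noted above
    workers = int(os.environ.get("WEB_CONCURRENCY", "1"))
    # "main:app" is a placeholder import string; workers > 1 requires passing
    # an import string rather than an app object
    uvicorn.run("main:app", host="0.0.0.0", port=8000, workers=workers)
```
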
* backend fixes: fix graceful stop + stats
- use redis to track stopping state, which is overwritten when the crawl finishes (see sketch below)
- also include stats in completed crawls
- docker: use short container id for crawl id
- graceful stop returns 'stopping_gracefully' instead of 'stopped_gracefully'
- don't set stopping state when complete!
- begin files support: resolve absolute urls for crawl detail (not pre-signing yet)
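
A minimal sketch of the redis-backed stopping state described above, with hypothetical key names and a generic redis-py-style async client (not the repo's actual crawl manager):

```python
# Sketch: track graceful-stop state in Redis; completion overwrites it.
class CrawlState:
    def __init__(self, redis):
        self.redis = redis  # any asyncio Redis client with redis-py-style commands

    async def mark_stopping(self, crawl_id: str):
        # reported to the API as 'stopping_gracefully' while the crawl winds down
        await self.redis.set(f"crawl:{crawl_id}:state", "stopping_gracefully")

    async def mark_complete(self, crawl_id: str, stats: dict):
        # a finished crawl overwrites any stopping state and keeps its final stats
        await self.redis.set(f"crawl:{crawl_id}:state", "complete")
        if stats:
            await self.redis.hset(f"crawl:{crawl_id}:stats", mapping=stats)
```
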
* backend:
- refactor invite system, move to separate InviteOps object, used by archives and users (see sketch below)
- support three invite use cases:
  1) superuser invites a user who is not yet registered; the user is not added to any archive
  2) archive admin invites a user who is not yet registered and adds them to one of their archives
  3) archive admin invites an existing registered user and adds them to one of their archives
- support superadmin invite via /users/invite (fixes #37)
- superadmin invite has no archive set and does not add the user to an archive
- don't send verification email when accepting an invite (fixes #50)
- use a different email template / accept url for existing-user invites, e.g. `/invite/accept/`
- fix default token value in chart
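
An illustrative sketch of the three invite paths; the class shape, method names, and email helpers are assumptions, not the actual InviteOps interface:

```python
# Sketch: one entry point handling superuser and archive-admin invites.
class InviteOps:
    def __init__(self, db, email_sender):
        self.db = db
        self.email = email_sender

    async def invite_user(self, email: str, archive=None):
        existing = await self.db.users.find_one({"email": email})
        if existing and archive:
            # case 3: existing registered user added to an archive;
            # uses the existing-user template linking to /invite/accept/
            token = await self._new_invite(email, archive_id=archive["_id"])
            await self.email.send_existing_user_invite(email, token)
        else:
            # cases 1 and 2: user not yet registered; archive is None for a
            # superuser invite (no archive set), so the user joins no archive
            archive_id = archive["_id"] if archive else None
            token = await self._new_invite(email, archive_id=archive_id)
            await self.email.send_new_user_invite(email, token)

    async def _new_invite(self, email: str, archive_id=None):
        ...  # persist an invite token plus optional archive id, return the token
```
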
* backend:
- add /api/settings endpoint for misc system-wide settings
- setting 'registrationEnabled': whether open registration is enabled, set via REGISTRATION_ENABLED=1 env var
- setting 'jwtTokenLifetimeMinutes': the jwt token expiry, configured in minutes via JWT_TOKEN_LIFETIME_MINUTES env var (default: 60)
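
A minimal sketch of such a settings endpoint backed by env vars; the response keys mirror the entries above, but the exact shape is illustrative:

```python
# Sketch: expose system-wide settings read from the environment.
import os

from fastapi import FastAPI

app = FastAPI()

@app.get("/api/settings")
async def get_settings():
    return {
        "registrationEnabled": os.environ.get("REGISTRATION_ENABLED") == "1",
        "jwtTokenLifetimeMinutes": int(os.environ.get("JWT_TOKEN_LIFETIME_MINUTES", 60)),
    }
```
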
* support running backend + frontend together on k8s
* split nginx container into separate frontend service, which uses the nginx-base image and the static frontend files
* add nginx-based frontend image to docker-compose build (for building only, docker-based combined deployment not yet supported)
* backend:
- fix paths for email templates
- chart: support '--set backend_only=1' and '--set frontend_only=1' to force deploying only the backend or only the frontend
- run backend from root /api in uvicorn
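
One common way to serve every backend route from an /api root, sketched with placeholder routes; this is an assumption about the approach, not the repo's exact wiring:

```python
# Sketch: mount the API app under /api so uvicorn serves everything from that root.
from fastapi import FastAPI

api_app = FastAPI()

@api_app.get("/healthz")
async def healthz():
    # ends up reachable at /api/healthz
    return {"ok": True}

app = FastAPI()
app.mount("/api", api_app)  # run with: uvicorn main:app
```
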
* k8s: support email configuration
support sending reset password email
fix for #32
* fastapi-users: update to latest (8.1.2)
send verification email upon registration
* update to latest fastapi-users (8.1.2), refactor to use UserManager class
ensure verification email is sent upon registration, without requiring a separate API call
fixes #32
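
A sketch of the fastapi-users 8.x UserManager hooks involved; the secrets and the email step are placeholders:

```python
# Sketch: request verification right after registration, so no extra API call is needed.
from typing import Optional

from fastapi import Request
from fastapi_users import BaseUserManager

class UserManager(BaseUserManager):
    reset_password_token_secret = "CHANGE_ME"
    verification_token_secret = "CHANGE_ME"

    async def on_after_register(self, user, request: Optional[Request] = None):
        # immediately request verification so the frontend doesn't need a
        # separate API call after registering
        await self.request_verify(user, request)

    async def on_after_request_verify(self, user, token: str, request: Optional[Request] = None):
        # a real implementation would email the token; printed here as a placeholder
        print(f"send verification email to {user.email} with token {token}")
```
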
* add email options to default chart/values.yaml
* separate UserManager init from fastapi-users init, fix for sending invite emails
* misc backend fixes:
- fix running w/o local minio
- ensure crawler image pull policy is configurable, loaded via chart value
- use digitalocean repo for main backend image (for now)
- add bucket_name to config only if using default bucket
* enable all behaviors, support 'access_endpoint_url' for default storages
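
An illustrative sketch of how a storage config might be assembled under those two rules (bucket_name only for the default bucket, optional access_endpoint_url for default storages); the field names are assumptions:

```python
# Sketch: build a storage config dict for the crawler.
def build_storage_config(storage: dict, is_default_bucket: bool) -> dict:
    config = {
        "endpoint_url": storage["endpoint_url"],
        "access_key": storage["access_key"],
        "secret_key": storage["secret_key"],
    }
    if is_default_bucket:
        # only the default bucket contributes bucket_name
        config["bucket_name"] = storage["bucket_name"]
    if storage.get("access_endpoint_url"):
        # public access endpoint for files in default storages
        config["access_endpoint_url"] = storage["access_endpoint_url"]
    return config
```
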
* debugging: add 'no_delete_jobs' setting for k8s and docker to disable deletion of completed jobs
support screencasting to a dynamically created service via nginx (k8s only thus far)
add crawl /watch endpoint to enable watching; creates the service if it doesn't exist (see sketch below)
add crawl /running endpoint to check if a crawl is running
nginx auth check in place, but not yet enabled
add k8s nginx.conf
add missing chart files
file reorg: move docker config to configs/
k8s: add readiness check for nginx and api containers for smoother reloading
ensure service deleted along with job
todo: update dockerman with screencast support
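
A hypothetical sketch of the two crawl endpoints mentioned above; the crawl-manager interface is assumed, not the repo's dockerman/k8s manager API:

```python
# Sketch: /watch ensures a screencast service exists; /running reports crawl status.
from fastapi import APIRouter, HTTPException

def init_watch_api(crawl_manager) -> APIRouter:
    # crawl_manager is an assumed object exposing the two coroutines used below
    router = APIRouter(prefix="/crawls")

    @router.post("/{crawl_id}/watch")
    async def watch_crawl(crawl_id: str):
        # create the per-crawl screencast service if it doesn't exist, so nginx
        # can proxy watch traffic to it
        await crawl_manager.ensure_screencast_service(crawl_id)
        return {"watching": True}

    @router.get("/{crawl_id}/running")
    async def crawl_running(crawl_id: str):
        if not await crawl_manager.is_crawl_running(crawl_id):
            raise HTTPException(status_code=404, detail="crawl not running")
        return {"running": True}

    return router
```
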
move mongo into separate optional deployment along with minio
support for configuring storages
support for deleting crawls, associated config and secrets
- working apis for adding and removing crawls in mongo, mapped to k8s cronjobs
- more complete crawl spec
- option to start on-demand job from cronjobs
- optional minio in separate deployment/service
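
A sketch of starting an on-demand job from a crawl's cronjob with the official kubernetes Python client; the names, namespace, and the batch/v1 CronJob API group are assumptions that depend on cluster and client versions:

```python
# Sketch: copy a cronjob's job template into a one-off Job to run a crawl now.
from kubernetes import client, config

def run_crawl_now(cron_job_name: str, namespace: str = "crawlers") -> str:
    config.load_incluster_config()  # use config.load_kube_config() outside the cluster
    batch = client.BatchV1Api()

    cron_job = batch.read_namespaced_cron_job(cron_job_name, namespace)

    job = client.V1Job(
        metadata=client.V1ObjectMeta(generate_name=f"{cron_job_name}-run-"),
        spec=cron_job.spec.job_template.spec,  # reuse the crawl job spec as-is
    )
    created = batch.create_namespaced_job(namespace=namespace, body=job)
    return created.metadata.name
```
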