Nightly test fix for crawler 1.7.0 (#2789)
- Use the latest crawler image for tests.
- Due to the webrecorder/browsertrix-crawler#861 change, a crawl with no successful pages should be treated as failed. Update the fixture to accept either a failed or complete state for backwards compatibility for now.
commit 5a4add84a8
parent b739de419c
@@ -306,7 +306,7 @@ def error_crawl_id(admin_auth_headers, default_org_id):
             headers=admin_auth_headers,
         )
         data = r.json()
-        if data["state"] == "complete":
+        if data["state"] in ("failed", "complete"):
             return crawl_id
         time.sleep(5)
 
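For context, the hunk above sits inside a polling fixture that waits for the error crawl to reach a terminal state. Below is a minimal sketch of how such a fixture might look; the API base URL, endpoint path, and crawl-start details are assumptions for illustration, not the repository's actual code. Only the state check matches the change in this commit.

# Sketch of a polling fixture around the changed lines (assumed details marked below).
import time

import pytest
import requests

API_PREFIX = "http://localhost:30870/api"  # assumed base URL, not from this commit


@pytest.fixture(scope="session")
def error_crawl_id(admin_auth_headers, default_org_id):
    # Assume a crawl that produces no successful pages has already been
    # started and its id captured as crawl_id (setup omitted here).
    crawl_id = "..."  # placeholder for the started crawl's id

    while True:
        r = requests.get(
            # Hypothetical endpoint path used for illustration only.
            f"{API_PREFIX}/orgs/{default_org_id}/crawls/{crawl_id}/replay.json",
            headers=admin_auth_headers,
        )
        data = r.json()
        # With crawler 1.7.0, a crawl with no successful pages ends as
        # "failed"; accept either state for backwards compatibility.
        if data["state"] in ("failed", "complete"):
            return crawl_id
        time.sleep(5)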
@@ -24,7 +24,7 @@ crawler_channels:
     image: "docker.io/webrecorder/browsertrix-crawler:latest"
 
   - id: test
-    image: "docker.io/webrecorder/browsertrix-crawler:1.7.0-beta.0"
+    image: "docker.io/webrecorder/browsertrix-crawler:latest"
 
 mongo_auth:
   # specify either username + password (for local mongo)