# Crawl Workflow Setup
The first step in creating a new crawl workflow is to choose what type of crawl you want to run. The crawl type is fixed and cannot be changed after the workflow is created.
`URL List` { .badge-blue }
:   The crawler visits every URL specified in a list, and optionally every URL linked on those pages.

`Seeded Crawl` { .badge-orange }
:   The crawler automatically discovers and archives pages starting from a single seed URL.
## Scope
### List of URLs

`URL List` { .badge-blue } `Seeded Crawl` { .badge-orange }
This list informs the crawler which pages it should capture as part of a URL List crawl.

It is also available under the Additional URLs section for Seeded Crawls, where it accepts arbitrary URLs that will be crawled regardless of other scoping rules.
### Include Any Linked Page

`URL List` { .badge-blue }
When enabled, the crawler will visit all the links it finds within each page defined in the List of URLs field.
??? tip "Crawling tags & search queries with URL List crawls"
    This setting can be useful for crawling the content of specific tags or search queries. Specify the tag or search query URL(s) in the List of URLs field, e.g: `https://example.com/search?q=tag`, and enable Include Any Linked Page to crawl all the content present on that search query page.
### Crawl Start URL

`Seeded Crawl` { .badge-orange }
This is the first page that the crawler will visit. It's important to set a Crawl Start URL that accurately represents the scope of the pages you wish to crawl, as the available Start URL Scope options depend on this field's contents.

You must specify the protocol (likely `http://` or `https://`) as part of the URL entered into this field.
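As a quick illustration, a URL's protocol can be checked by parsing it and inspecting the scheme. This snippet is a minimal sketch written for this doc, not part of Browsertrix itself:

```python
from urllib.parse import urlparse

def has_protocol(url: str) -> bool:
    """Return True when the URL carries an explicit http or https scheme."""
    return urlparse(url).scheme in ("http", "https")

print(has_protocol("https://example.com/path"))  # True
print(has_protocol("example.com/path"))          # False: protocol is missing
```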
### Start URL Scope

`Seeded Crawl` { .badge-orange }
`Hashtag Links Only`
:   This scope will ignore links that lead to other addresses such as `example.com/path` and will instead instruct the crawler to visit hashtag links such as `example.com/#linkedsection`. This scope can be useful for crawling certain web apps that may not use unique URLs for their pages.

`Pages in the Same Directory`
:   This scope will only crawl pages in the same directory as the Crawl Start URL. If `example.com/path` is set as the Crawl Start URL, `example.com/path/path2` will be crawled but `example.com/path3` will not.

`Pages on This Domain`
:   This scope will crawl all pages on the domain entered as the Crawl Start URL, but will ignore subdomains such as `subdomain.example.com`.

`Pages on This Domain and Subdomains`
:   This scope will crawl all pages on the domain and any subdomains found. If `example.com` is set as the Crawl Start URL, pages on both `example.com` and `subdomain.example.com` will be crawled.

`Custom Page Prefix`
:   This scope will crawl all pages that begin with the Crawl Start URL, as well as pages from any URL that begins with the URLs listed in Extra URLs in Scope.
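To make these five scope rules concrete, here is a rough Python sketch of the matching logic. The function and scope names are assumptions made for this example, not Browsertrix's actual implementation:

```python
from urllib.parse import urlparse

def in_scope(url, start_url, scope, extra_prefixes=()):
    """Sketch of the Start URL Scope rules; names are illustrative."""
    u, s = urlparse(url), urlparse(start_url)
    if scope == "hashtag-links-only":
        # Only the start page itself; new content is reached via #fragment links.
        return (u.netloc, u.path) == (s.netloc, s.path)
    if scope == "same-directory":
        directory = s.path if s.path.endswith("/") else s.path + "/"
        return u.netloc == s.netloc and (u.path == s.path or u.path.startswith(directory))
    if scope == "same-domain":
        return u.netloc == s.netloc
    if scope == "domain-and-subdomains":
        return u.netloc == s.netloc or u.netloc.endswith("." + s.netloc)
    if scope == "custom-page-prefix":
        # The start URL itself plus any prefix listed under Extra URLs in Scope.
        return any(url.startswith(p) for p in (start_url, *extra_prefixes))
    raise ValueError(f"unknown scope: {scope}")

print(in_scope("https://example.com/path/path2", "https://example.com/path", "same-directory"))  # True
print(in_scope("https://example.com/path3", "https://example.com/path", "same-directory"))       # False
```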
### Max Depth

`Seeded Crawl` { .badge-orange }
Only shown with a Start URL Scope of `Pages on This Domain` and above, the Max Depth setting instructs the crawler to stop visiting new links past a specified depth.
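Depth counts how many links the crawler followed from the start URL. A minimal sketch of a depth-limited traversal, with a hypothetical `get_links` callback standing in for page loading and link extraction:

```python
from collections import deque

def crawl(start_url, get_links, max_depth):
    """Breadth-first traversal that stops following links past max_depth."""
    seen = {start_url}
    queue = deque([(start_url, 0)])
    while queue:
        url, depth = queue.popleft()
        yield url  # this page is still captured...
        if depth >= max_depth:
            continue  # ...but its outgoing links are not followed
        for link in get_links(url):
            if link not in seen:
                seen.add(link)
                queue.append((link, depth + 1))
```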
### Extra URLs in Scope

`Seeded Crawl` { .badge-orange }
Only shown with a Start URL Scope of `Custom Page Prefix`, this field accepts additional URLs or domains that will be crawled if URLs that lead to them are found.

This can be useful for crawling websites that span multiple domains, such as `example.org` and `example.net`.
### Include Any Linked Page ("one hop out")

`Seeded Crawl` { .badge-orange }
When enabled, the crawler will visit all the links it finds within each page, regardless of the Start URL Scope setting.
This can be useful for capturing links on a page that lead outside the website that is being crawled but should still be included in the archive for context.
### Exclusions

`URL List` { .badge-blue } `Seeded Crawl` { .badge-orange }
The exclusions table will instruct the crawler to ignore links it finds on pages where all or part of the link matches an exclusion found in the table. The table is only available in URL List crawls when Include Any Linked Page is enabled.
This can be useful for avoiding crawler traps — sites that may automatically generate pages such as calendars or filter options — or other pages that should not be crawled according to their URL.
`Matches text`
:   Will perform simple matching of the entered text and exclude all URLs where matching text is found.

    e.g: If `about` is entered, `example.com/aboutme/` will not be crawled.

`Regex`
:   Regular expressions (Regex) can also be used to perform more complex matching.

    e.g: If `#!regex \babout\/?\b` is entered, `example.com/about/` will not be crawled, however `example.com/aboutme/` will be crawled.
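To illustrate the two matching modes, here is a small Python sketch; the helper name and rule representation are assumptions for this example only:

```python
import re

def is_excluded(url, text_rules=(), regex_rules=()):
    """A link is skipped when any exclusion matches all or part of its URL."""
    if any(text in url for text in text_rules):
        return True
    return any(re.search(pattern, url) for pattern in regex_rules)

# Text matching: "about" appears anywhere in the URL, so it is excluded.
print(is_excluded("https://example.com/aboutme/", text_rules=["about"]))          # True

# Regex matching: \babout/?\b matches /about/ but not /aboutme/.
print(is_excluded("https://example.com/about/", regex_rules=[r"\babout/?\b"]))    # True
print(is_excluded("https://example.com/aboutme/", regex_rules=[r"\babout/?\b"]))  # False
```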
## Limits
### Page Load Timeout

Limits the amount of time to wait for a page to load. Behaviors will run after this timeout only if the page is partially or fully loaded.
### Behavior Timeout

Limits how long behaviors can run on each page.
### Auto Scroll Behavior

When enabled, the browser will automatically scroll to the end of the page.
### Delay Before Next Page

Waits on the page for a set period of time after any behaviors have finished running. This can be helpful for avoiding rate limiting; however, it will slow down your crawl.
### Max Pages

Adds a hard limit on the number of pages that will be crawled. The crawl will be gracefully stopped after this limit is reached.
### Crawl Time Limit

The crawl will be gracefully stopped after this set period of time.
### Crawler Instances

Increasing the number of crawler instances will speed up crawls by using additional browser windows to capture more pages in parallel. This will also increase the amount of traffic sent to the website and may result in a higher chance of getting rate limited.
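One way to picture how the per-page limits interact is the asyncio sketch below. The callback names (`load_page`, `run_behaviors`) and the default values are placeholders, not Browsertrix internals:

```python
import asyncio

async def capture_page(url, load_page, run_behaviors,
                       page_load_timeout=90, behavior_timeout=300, delay=0):
    """Sketch: Page Load Timeout, Behavior Timeout, and Delay Before Next Page."""
    try:
        await asyncio.wait_for(load_page(url), timeout=page_load_timeout)
    except asyncio.TimeoutError:
        pass  # if the page partially loaded, behaviors still run below
    try:
        await asyncio.wait_for(run_behaviors(url), timeout=behavior_timeout)
    except asyncio.TimeoutError:
        pass  # behaviors are cut off; the page capture is kept
    await asyncio.sleep(delay)  # Delay Before Next Page
```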
## Browser Settings
### Browser Profile

Sets the Browser Profile to be used for this crawl.
### Block Ads by Domain

Will prevent any content from the domains listed in Steven Black's Unified Hosts file (ads & malware) from being captured by the crawler.
### Language

Sets the browser's language setting. Useful for crawling websites that detect the browser's language setting and serve content accordingly.
## Scheduling
### Crawl Schedule Type

`Run Immediately on Save`
:   When selected, the crawl will run immediately as configured. It will not run again unless manually instructed.

`Run on a Recurring Basis`
:   When selected, additional configuration options for instructing the system when to run the crawl will be shown. If a crawl is already running when the schedule is set to activate it, the scheduled crawl will not run.

`No Schedule`
:   When selected, the configuration options that have been set will be saved but the system will not do anything with them unless manually instructed.
### Frequency

Sets how often a scheduled crawl will run.
### Day

Sets the day of the week on which crawls scheduled with a `Weekly` Frequency will run.
### Date

Sets the date of the month on which crawls scheduled with a `Monthly` Frequency will run.
### Start Time

Sets the time that the scheduled crawl will start according to your current timezone.
### Also Run a Crawl Immediately On Save

When enabled, a crawl will run immediately on save as if the `Run Immediately on Save` Crawl Schedule Type was selected, in addition to scheduling a crawl to run according to the above settings.
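Recurring schedules like these are commonly expressed as cron expressions. As a rough illustration only (this is not Browsertrix's actual scheduling code), the fields above could map to cron like so:

```python
def schedule_to_cron(frequency, hour=0, minute=0, day_of_week=0, day_of_month=1):
    """Hypothetical mapping from Frequency/Day/Date/Start Time to a cron expression."""
    if frequency == "daily":
        return f"{minute} {hour} * * *"
    if frequency == "weekly":
        return f"{minute} {hour} * * {day_of_week}"  # 0 = Sunday
    if frequency == "monthly":
        return f"{minute} {hour} {day_of_month} * *"
    raise ValueError(f"unknown frequency: {frequency}")

print(schedule_to_cron("weekly", hour=3, minute=30, day_of_week=1))  # 30 3 * * 1 (Mondays, 03:30)
```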
## Metadata
### Name

Allows a custom name to be set for the workflow. If no name is set, the workflow's name will be set to the Crawl Start URL. For URL List crawls, the workflow's name will be set to the first URL present in the List of URLs field, with an added `(+x)` where `x` represents the total number of URLs in the list.
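The default-name rule for URL List workflows can be sketched as follows (hypothetical helper, mirroring the rule above):

```python
def default_workflow_name(urls):
    """First URL plus (+x), where x is the total number of URLs in the list."""
    if len(urls) == 1:
        return urls[0]  # assumption: a single-URL list uses the URL alone
    return f"{urls[0]} (+{len(urls)})"

print(default_workflow_name(["https://example.com/"]))
print(default_workflow_name(["https://example.com/", "https://example.net/"]))  # ... (+2)
```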
### Description

Leave optional notes about the workflow's configuration.
### Tags

Apply tags to the workflow. Tags applied to the workflow will propagate to every crawl created with it at the time of crawl creation.
### Collection Auto-Add

Search for and specify collections that this crawl workflow should automatically add content to as soon as crawls finish running. Cancelled and Failed crawls will not be automatically added to collections.