# Crawling Process
The crawling process downloads HTML documents and saves them into per-domain snapshots. The crawler seeks out HTML
documents and ignores other types of documents, such as PDFs. Crawling is done on a domain-by-domain basis, and the
crawler does not follow links to other domains within a single job.
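
To illustrate the domain-by-domain restriction, below is a minimal sketch of a scope check that decides whether a
discovered link should be followed. The class and method names are hypothetical and do not correspond to the actual
crawler code:

```java
import java.net.URI;

// Hypothetical illustration, not the actual crawler code:
// restricts a crawl job to links within a single domain.
class DomainScope {
    private final String domain;

    DomainScope(String domain) {
        this.domain = domain;
    }

    /** True if the link points to the domain being crawled. */
    boolean inScope(URI link) {
        String host = link.getHost();
        return host != null && host.equalsIgnoreCase(domain);
    }
}
```

A link that fails this check is simply not followed within the job.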
The crawler stores data from in-progress crawls in a WARC file. Once the crawl is complete, the WARC file is
converted to a parquet file, which is then used by the [converting process](../converting-process/). The intermediate
WARC file is not used by any other process, but it is kept during the crawl to allow recovery of the crawl's state in
case of a crash or other failure.

If so configured, the WARC files may be retained after the crawl completes. This is not the default behavior, since
the WARC format is not very dense and the parquet files are much more efficient. However, the WARC files are useful
for debugging and for integration with other tools.
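
To give a feel for what the intermediate WARC file contains, here is a minimal sketch that iterates over the captured
HTTP responses in one. It assumes the [jwarc](https://github.com/iipc/jwarc) library; the file path is a placeholder,
and this is not the crawler's own code:

```java
import org.netpreserve.jwarc.WarcReader;
import org.netpreserve.jwarc.WarcRecord;
import org.netpreserve.jwarc.WarcResponse;

import java.nio.file.Path;

class WarcDump {
    public static void main(String[] args) throws Exception {
        // "crawl.warc.gz" is a placeholder, not an actual artifact name
        try (WarcReader reader = new WarcReader(Path.of("crawl.warc.gz"))) {
            for (WarcRecord record : reader) {
                // Each captured page is stored as a WARC response record
                if (record instanceof WarcResponse response) {
                    System.out.println(response.http().status() + " " + response.target());
                }
            }
        }
    }
}
```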
## Robots Rules
A significant part of the crawler deals with `robots.txt` and similar mechanisms, such as rate-limiting headers,
especially when these are not served in a standard way (which is very common). [RFC 9309](https://www.rfc-editor.org/rfc/rfc9309.html) as well as Google's [Robots.txt Specifications](https://developers.google.com/search/docs/advanced/robots/robots_txt) are good references.
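
For illustration, the sketch below turns a fetched `robots.txt` into allow/deny decisions using the
[crawler-commons](https://github.com/crawler-commons/crawler-commons) library. That this matches the crawler's
internal handling is an assumption, and the user agent string is a placeholder:

```java
import crawlercommons.robots.BaseRobotRules;
import crawlercommons.robots.SimpleRobotRulesParser;

import java.nio.charset.StandardCharsets;

class RobotsCheck {
    public static void main(String[] args) {
        String robotsTxt = """
                User-agent: *
                Disallow: /private/
                Crawl-delay: 2
                """;

        SimpleRobotRulesParser parser = new SimpleRobotRulesParser();
        // Parse the rules as they apply to our (placeholder) user agent
        BaseRobotRules rules = parser.parseContent(
                "https://www.example.com/robots.txt",
                robotsTxt.getBytes(StandardCharsets.UTF_8),
                "text/plain",
                "example-crawler");

        System.out.println(rules.isAllowed("https://www.example.com/private/page.html")); // false
        System.out.println(rules.isAllowed("https://www.example.com/index.html"));        // true
        System.out.println(rules.getCrawlDelay()); // 2000 (milliseconds)
    }
}
```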
## Re-crawling
The crawler can use old crawl data to avoid re-downloading documents that have not changed. This is done by issuing
conditional requests with the HTTP `If-Modified-Since` and `If-None-Match` headers, populated from the previous
crawl's data. If a large proportion of a domain's documents have not changed, the crawler falls into a mode where it
only randomly samples a few documents from the domain, to avoid wasting time and resources on domains that have not
changed.
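
A minimal sketch of such a conditional request is shown below, using the standard `java.net.http` client. The URL,
ETag, and date values are placeholders, and this is not the crawler's actual fetch code:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class ConditionalFetch {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // Validators remembered from the previous crawl (placeholder values)
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://www.example.com/page.html"))
                .header("If-Modified-Since", "Sat, 04 Mar 2023 16:42:31 GMT")
                .header("If-None-Match", "\"abc123\"")
                .GET()
                .build();

        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() == 304) {
            // 304 Not Modified: the document from the old crawl data can be reused
            System.out.println("Unchanged, keeping old copy");
        } else {
            System.out.println("Changed, re-processing " + response.body().length() + " bytes");
        }
    }
}
```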
## Sitemaps and RSS Feeds
On top of organic links, the crawler can use sitemaps and RSS feeds to discover new documents.
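
For illustration, the sketch below extracts page URLs from a sitemap in the standard
[sitemaps.org](https://www.sitemaps.org/) format, using the JDK's built-in XML parser. The sitemap contents are
placeholder data, and this is not the crawler's own sitemap handling:

```java
import org.w3c.dom.Document;
import org.w3c.dom.NodeList;

import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;

class SitemapUrls {
    public static void main(String[] args) throws Exception {
        // A minimal sitemap in the standard sitemaps.org format (placeholder data)
        String sitemap = """
                <?xml version="1.0" encoding="UTF-8"?>
                <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
                  <url><loc>https://www.example.com/</loc></url>
                  <url><loc>https://www.example.com/about.html</loc></url>
                </urlset>
                """;

        Document doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(sitemap.getBytes(StandardCharsets.UTF_8)));

        // Each <loc> element holds a document URL the crawler may visit
        NodeList locs = doc.getElementsByTagName("loc");
        for (int i = 0; i < locs.getLength(); i++) {
            System.out.println(locs.item(i).getTextContent().strip());
        }
    }
}
```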
## Central Classes
* [CrawlerMain](src/main/java/nu/marginalia/crawl/CrawlerMain.java) orchestrates the crawling.
* [CrawlerRetreiver](src/main/java/nu/marginalia/crawl/retreival/CrawlerRetreiver.java)
  visits known addresses from a domain and downloads each document.
* [HttpFetcher](src/main/java/nu/marginalia/crawl/retreival/fetcher/HttpFetcherImpl.java)
  fetches URLs.
## See Also
* [features-crawl](../../features-crawl/)