
Crawling Process

The crawling process downloads HTML documents and saves them into per-domain snapshots. The crawler seeks out HTML documents and ignores other types of documents, such as PDFs. Crawling is done on a domain-by-domain basis, and within a single job the crawler does not follow links to other domains.
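
The two filters described above can be illustrated with a minimal sketch (not the project's actual classes): keep only responses that declare an HTML content type, and only follow links that stay on the domain of the current job.

```java
import java.net.URI;

class CrawlFilters {
    /** Accept only responses that declare an HTML content type. */
    static boolean isHtml(String contentTypeHeader) {
        if (contentTypeHeader == null) return false;
        String mime = contentTypeHeader.split(";", 2)[0].trim().toLowerCase();
        return mime.equals("text/html") || mime.equals("application/xhtml+xml");
    }

    /** Only follow links that point back to the domain currently being crawled. */
    static boolean sameDomain(URI jobRoot, URI link) {
        return jobRoot.getHost() != null
            && jobRoot.getHost().equalsIgnoreCase(link.getHost());
    }
}
```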

Robots Rules

A significant part of the crawler deals with robots.txt and similar rate-limiting directives, especially when these are not served in a standard way (which is very common). RFC 9309 as well as Google's Robots.txt Specifications are good references.
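
As a hedged sketch of what robots.txt handling involves, the snippet below fetches and parses a robots file with the crawler-commons library; this library and the user agent token are assumptions for illustration, not necessarily what this project uses.

```java
import crawlercommons.robots.SimpleRobotRules;
import crawlercommons.robots.SimpleRobotRulesParser;

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class RobotsExample {
    public static void main(String[] args) throws Exception {
        var robotsUrl = "https://www.example.com/robots.txt";

        var response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(robotsUrl)).GET().build(),
                HttpResponse.BodyHandlers.ofByteArray());

        // Parse whatever the server returned; malformed files are very common,
        // so the parser is deliberately lenient.
        SimpleRobotRules rules = new SimpleRobotRulesParser().parseContent(
                robotsUrl,
                response.body(),
                response.headers().firstValue("Content-Type").orElse("text/plain"),
                "my-crawler");  // hypothetical user agent token

        System.out.println("Allowed: " + rules.isAllowed("https://www.example.com/some/page"));
        System.out.println("Crawl-delay: " + rules.getCrawlDelay());
    }
}
```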

Re-crawling

The crawler can use old crawl data to avoid re-downloading documents that have not changed. It does this by issuing conditional requests with the HTTP If-Modified-Since and If-None-Match headers, based on the timestamps and ETags recorded in the previous crawl. If a large proportion of a domain's documents turn out to be unchanged, the crawler switches to a mode where it only randomly samples a few documents from that domain, to avoid wasting time and resources on domains that have not changed.
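
The conditional-request idea can be sketched with the JDK's java.net.http client; the etag and lastModified values here are placeholders standing in for data saved from the previous crawl.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

class RecrawlExample {
    public static void main(String[] args) throws Exception {
        String etag = "\"abc123\"";                             // from the old crawl data
        String lastModified = "Sat, 30 Dec 2023 12:00:00 GMT";  // from the old crawl data

        var request = HttpRequest.newBuilder(URI.create("https://www.example.com/page.html"))
                .header("If-None-Match", etag)
                .header("If-Modified-Since", lastModified)
                .GET()
                .build();

        var response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        if (response.statusCode() == 304) {
            // Not modified: reuse the document from the previous crawl.
        } else {
            // Modified (or the server ignores conditional requests): store the new body.
        }
    }
}
```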

Sitemaps and RSS Feeds

In addition to organically discovered links, the crawler can use sitemaps and RSS feeds to discover new documents.
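
A rough sketch of sitemap handling is shown below, pulling the URLs out of a sitemap.xml with the JDK's built-in XML parser; this is illustrative only, and the project may well use a dedicated sitemap or feed library instead.

```java
import javax.xml.parsers.DocumentBuilderFactory;
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

class SitemapExample {
    static List<String> extractUrls(String sitemapXml) throws Exception {
        var doc = DocumentBuilderFactory.newInstance()
                .newDocumentBuilder()
                .parse(new ByteArrayInputStream(sitemapXml.getBytes(StandardCharsets.UTF_8)));

        // Each <loc> element inside a <url> (or <sitemap>) entry holds a URL to visit.
        var locs = doc.getElementsByTagName("loc");
        List<String> urls = new ArrayList<>();
        for (int i = 0; i < locs.getLength(); i++) {
            urls.add(locs.item(i).getTextContent().trim());
        }
        return urls;
    }
}
```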

Central Classes

See Also