440e097d78
This commit is in a fairly rough state. It refactors the crawler significantly to offer better separation of concerns. It entirely replaces the zstd-compressed JSON files used to store crawl data with WARC files, and the converter is modified to consume this data. This works, -ish. There appears to be a bug relating to reading robots.txt, and the X-Robots-Tag header is no longer processed either. Another problem is that the WARC files are somewhat too large. It will probably be necessary to introduce a new format for long-term storage of the crawl data, something like Parquet, and use WARCs for intermediate storage so that the crawler can be restarted without needing a full recrawl.
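As a reminder of what the dropped X-Robots-Tag handling needs to cover, here is a minimal sketch of parsing that header's directives. The directive names (`noindex`, `none`, an optional user-agent prefix) follow the commonly documented X-Robots-Tag syntax; the class and method names are hypothetical and not Marginalia's actual API.

```java
import java.util.List;
import java.util.Locale;

public class XRobotsTag {
    /** Returns true if any X-Robots-Tag header value forbids indexing the page. */
    public static boolean forbidsIndexing(List<String> headerValues) {
        for (String value : headerValues) {
            // One header may carry several comma-separated directives,
            // optionally prefixed with a user agent ("googlebot: noindex").
            for (String directive : value.split(",")) {
                String d = directive.trim().toLowerCase(Locale.ROOT);
                // Strip an optional "useragent:" prefix.
                int colon = d.indexOf(':');
                if (colon >= 0) {
                    d = d.substring(colon + 1).trim();
                }
                // "none" is shorthand for "noindex, nofollow".
                if (d.equals("noindex") || d.equals("none")) {
                    return true;
                }
            }
        }
        return false;
    }
}
```

A real implementation would also need to match the user-agent prefix against the crawler's own identity rather than honoring directives aimed at other bots.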
assistant-service
control-service
executor-service
index-service
query-service
readme.md
Core Services
The core services constitute the main functionality of the search engine, and are relatively agnostic to the Marginalia application.
- The index-service contains the indexes; it answers questions about which documents contain which terms.
- The query-service interprets queries and delegates work to the index-service.
- The control-service provides an operator's user interface, and is responsible for orchestrating the various processes of the system.
- The assistant-service helps the search service with spelling suggestions and other peripheral functionality.