CatgirlIntelligenceAgency/code
Viktor Lofgren c73e43f5c9 (recrawl) Mitigate recrawl-before-load footgun
In the scenario where an operator

* Performs a new crawl from spec
* Doesn't load the data into the index
* Recrawls the data

The recrawl will not find the domains in the database, and the crawl log will be overwritten with an empty file,
irrecoverably losing the old crawl log and making the crawled data impossible to load.

To mitigate the impact of similar problems, the change saves a backup of the old crawl log and complains loudly when this happens.

More specifically to this exact scenario, the parquet-loaded domains are also preemptively inserted into the domain database at the start of the crawl.  This should help the DbCrawlSpecProvider find them regardless of load state.

This may seem a bit redundant, but losing crawl data is arguably the worst type of disaster scenario for this software, so the extra caution is merited.
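For illustration, the backup step could look roughly like the sketch below. The file names, directory layout and warning mechanism are assumptions made for this example, not the actual implementation.

    // Hypothetical sketch: before a recrawl overwrites crawl.log, copy the
    // existing file aside and warn the operator. Names are illustrative only.
    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.nio.file.StandardCopyOption;
    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;

    class CrawlLogBackup {
        static void backUpExistingLog(Path crawlDir) throws IOException {
            Path crawlLog = crawlDir.resolve("crawl.log");

            // Nothing to back up if there is no previous log, or it is empty
            if (!Files.exists(crawlLog) || Files.size(crawlLog) == 0)
                return;

            String stamp = LocalDateTime.now()
                    .format(DateTimeFormatter.ofPattern("yyyyMMdd-HHmmss"));
            Path backup = crawlDir.resolve("crawl.log." + stamp + ".bak");

            // Keep a copy of the old log so a botched recrawl can be recovered
            Files.copy(crawlLog, backup, StandardCopyOption.REPLACE_EXISTING);

            System.err.println("WARNING: existing crawl log backed up to " + backup);
        }
    }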
2024-02-18 09:23:20 +01:00
api (client) Refactor GrpcStubPool to handle error states 2024-02-17 14:42:26 +01:00
common (blacklist) Delay startup of blacklist 2024-02-18 09:23:20 +01:00
features-convert (converter) Loader for reddit data 2024-02-14 17:35:44 +01:00
features-crawl (doc) Update docs 2024-02-06 16:29:55 +01:00
features-index (index-query) Add some tests for the QueryFilter code 2024-02-15 12:03:30 +01:00
features-qs (search/index) Add a new keyword "count" 2023-12-25 20:38:29 +01:00
features-search (*) Replace EC_DOMAIN_LINK table with files and in-memory caching 2024-01-08 15:53:13 +01:00
libraries (index-query) Add some tests for the QueryFilter code 2024-02-15 12:03:30 +01:00
process-models (process-models) Improve documentation 2024-02-15 12:21:12 +01:00
processes (recrawl) Mitigate recrawl-before-load footgun 2024-02-18 09:23:20 +01:00
services-application (search) Temporarily disable the Popular filter 2024-02-18 08:02:01 +01:00
services-core (recrawl) Mitigate recrawl-before-load footgun 2024-02-18 09:23:20 +01:00
tools (doc) Update docs 2024-02-06 12:41:28 +01:00
readme.md (doc) Update docs 2024-02-06 12:41:28 +01:00

Code

This is a pretty large and diverse project with many moving parts.

You'll find a short description in each module of what it does and how it relates to other modules. Each module is named something like "library", "process" or "feature"; these names have specific meanings, described in doc/module-taxonomy.md.

Overview

A map of the most important components and how they relate can be found below.

[Overview diagram: the most important components and how they relate]

The core part of the search engine is the index service, which is responsible for storing and retrieving the document data. The index service is partitioned, along with the executor service, which is responsible for executing processes. At least one instance of each service must be run, but more can be run alongside. Multiple partitions are desirable in production to distribute load across multiple physical drives, as well as to reduce the impact of downtime.

Search queries are delegated via the query service, which is a proxy that fans out the query to all eligible index services. The control service is responsible for distributing commands to the executor service, and for monitoring the health of the system. It also offers a web interface for operating the system.
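To give a sense of how the fan-out works, here is a minimal sketch of querying every index partition in parallel and merging the results. The IndexClient and SearchResult types are hypothetical stand-ins for this example, not the query service's actual interfaces.

    // Fan a query out to all index partitions and merge the partial results.
    // The types and ranking below are illustrative, not the real API.
    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;
    import java.util.concurrent.CompletableFuture;

    record SearchResult(String url, double score) {}

    interface IndexClient {
        List<SearchResult> query(String terms);
    }

    class QueryFanOut {
        private final List<IndexClient> partitions;

        QueryFanOut(List<IndexClient> partitions) {
            this.partitions = partitions;
        }

        List<SearchResult> query(String terms, int limit) {
            // Ask every partition in parallel
            var futures = partitions.stream()
                    .map(p -> CompletableFuture.supplyAsync(() -> p.query(terms)))
                    .toList();

            // Merge the partial results and rank them by score
            List<SearchResult> merged = new ArrayList<>();
            for (var f : futures) {
                merged.addAll(f.join());
            }
            merged.sort(Comparator.comparingDouble(SearchResult::score).reversed());

            return merged.subList(0, Math.min(limit, merged.size()));
        }
    }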

Services

Processes

Processes are batch jobs that deal with data retrieval, processing and loading. These are spawned and orchestrated by the executor service, which is controlled by the control service.

Features

Features are relatively stand-alone components that serve some part of the domain. They aren't domain-independent, but they are isolated and self-contained.

Libraries and primitives

Libraries are stand-alone code that is independent of the domain logic.

  • common - elements for creating a service, a client, etc.
  • libraries - non-search specific code:
    • array - large memory mapped area library
    • btree - static btree library
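To give a flavor of the problem the array library addresses, the sketch below maps a large file of longs into memory using plain Java NIO. This is a generic illustration, not the library's actual interface.

    // Memory-map a file and treat it as a large array of longs.
    // Generic Java NIO example; the 'array' library has its own API.
    import java.io.IOException;
    import java.nio.ByteOrder;
    import java.nio.LongBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.StandardOpenOption;

    class MappedLongArray {
        public static void main(String[] args) throws IOException {
            Path file = Path.of("data.dat");   // hypothetical data file
            long count = 1_000_000;            // number of longs to map

            try (FileChannel ch = FileChannel.open(file,
                    StandardOpenOption.CREATE,
                    StandardOpenOption.READ,
                    StandardOpenOption.WRITE)) {

                // Map count * 8 bytes of the file; reads and writes go
                // through the page cache rather than the Java heap
                LongBuffer longs = ch.map(FileChannel.MapMode.READ_WRITE, 0, count * 8)
                        .order(ByteOrder.nativeOrder())
                        .asLongBuffer();

                longs.put(0, 42L);                 // write
                System.out.println(longs.get(0));  // prints 42
            }
        }
    }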