Viktor Lofgren fab36d6e63 (converter) Loader for reddit data
Adds experimental sideloading support for pushshift.io-style reddit data.  This dataset is limited to data older than 2023, due to licensing changes making large-scale data extraction difficult.

Since the median post quality on reddit is not very good, the sideloader will only load a subset of self-texts and top-level comments that have sufficiently many upvotes.  Empirically this appears to mostly return good matches, even though it could probably index more.
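The filtering described above can be sketched roughly as below. This is an illustrative sketch only, not the actual sideloader code; the record shape, field names, and the upvote threshold are all assumptions.

```java
import java.util.List;
import java.util.stream.Collectors;

// Hypothetical sketch of the upvote-based filtering described above.
// The entry kinds and MIN_UPVOTES cutoff are assumptions for illustration.
public class RedditFilterSketch {
    record RedditEntry(String kind, int upvotes, String body) {}

    static final int MIN_UPVOTES = 10; // assumed cutoff, not the real value

    /** Keep only self-texts and top-level comments with enough upvotes. */
    static List<RedditEntry> filter(List<RedditEntry> entries) {
        return entries.stream()
                .filter(e -> e.kind().equals("selftext")
                          || e.kind().equals("top-level-comment"))
                .filter(e -> e.upvotes() >= MIN_UPVOTES)
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        var kept = filter(List.of(
                new RedditEntry("selftext", 50, "a good post"),
                new RedditEntry("top-level-comment", 3, "low score"),
                new RedditEntry("reply", 100, "nested comment")));
        System.out.println(kept.size()); // prints 1
    }
}
```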

Tests were written for this, but they all require local reddit data, which can't be distributed with the source code.  If this data cannot be found, the tests will short-circuit as OK.  They're mostly there for debugging, and it's fine if they don't always run.

The change also refactors the sideloading code, which had grown a bit messy.
2024-02-14 17:35:44 +01:00

Converting Process

The converting process reads crawl data and extracts information to be fed into the index, such as keywords, metadata, URLs, and descriptions.

The converter reads crawl data in the form of parquet files, and writes the extracted data to parquet files in a different format. These files are then passed to the loader process, which does the additional processing needed to feed the data into the index.

The reason for splitting the process into two parts is that the heavier converting process can be terminated and restarted without losing progress, while the lighter loader process needs to be run in a single go (or restarted if it crashes/terminates).
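The restartability property can be illustrated with a minimal sketch: if each input file deterministically maps to one output file, a restarted run simply skips inputs whose output already exists. This is not the actual Marginalia code; the directory layout, the `.out` suffix, and the stand-in "conversion" are assumptions for illustration.

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.stream.Stream;

// Illustrative sketch of a restartable batch converter: progress survives a
// crash because completed outputs are detected and skipped on the next run.
public class RestartableConverter {
    static void convertAll(Path inputDir, Path outputDir) throws IOException {
        Files.createDirectories(outputDir);
        try (Stream<Path> inputs = Files.list(inputDir)) {
            for (Path input : inputs.toList()) {
                Path output = outputDir.resolve(input.getFileName() + ".out");
                if (Files.exists(output))
                    continue; // progress from a previous run is preserved
                // Stand-in for the real extraction work:
                String converted = Files.readString(input).toUpperCase();
                Files.writeString(output, converted);
            }
        }
    }
}
```

Running `convertAll` twice over the same directories is harmless: the second pass finds every output already present and does no work.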

The converter's output is also, in general, more portable and can be used for different tasks, while the loader's output is heavily tailored to the index and of little use for anything else.

Structure

Most information is extracted from the document itself within DocumentProcessor, but some information is extracted from the context of the document, such as other documents on the same domain. This is done in DomainProcessor.

To support multiple document formats, the converting process is pluggable. Each plugin is responsible for converting a single document format, such as HTML or plain text.
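A pluggable converter of this shape can be sketched as below. The interface and class names here are hypothetical, chosen for illustration; they do not reflect the actual plugin API.

```java
import java.util.List;
import java.util.Optional;

// Hypothetical sketch of a pluggable converter: each plugin is responsible
// for one document format, and the first plugin that claims the content
// type performs the conversion.
public class PluginDispatch {
    interface DocumentPlugin {
        boolean handles(String contentType);
        String extractText(String rawDocument);
    }

    static class PlainTextPlugin implements DocumentPlugin {
        public boolean handles(String contentType) { return contentType.equals("text/plain"); }
        public String extractText(String raw) { return raw.strip(); }
    }

    static class HtmlPlugin implements DocumentPlugin {
        public boolean handles(String contentType) { return contentType.equals("text/html"); }
        // Crude tag stripping, a stand-in for real HTML processing
        public String extractText(String raw) { return raw.replaceAll("<[^>]*>", " ").strip(); }
    }

    static final List<DocumentPlugin> PLUGINS = List.of(new HtmlPlugin(), new PlainTextPlugin());

    static Optional<String> convert(String contentType, String raw) {
        return PLUGINS.stream()
                .filter(p -> p.handles(contentType))
                .findFirst()
                .map(p -> p.extractText(raw));
    }
}
```

Unsupported formats fall out naturally: `convert` returns an empty `Optional` when no plugin claims the content type.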

Further, the HTML plugin supports specializations, which refine the conversion process for specific server software, such as Javadoc, MediaWiki, PhpBB, etc. This improves the processing of common types of websites, and makes up for the fact that it's hard to build a one-size-fits-all heuristic for deciding which parts of a document are important that does justice to every website.
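One plausible way to select a specialization is to inspect the page's `<meta name="generator">` tag; the sketch below assumes that approach for illustration, and the enum and matching rules are not taken from the actual code.

```java
// Hypothetical sketch: picking an HTML specialization from the content of a
// <meta name="generator"> tag, falling back to generic processing.
public class SpecializationSelector {
    enum Specialization { JAVADOC, MEDIAWIKI, PHPBB, GENERIC }

    static Specialization select(String generatorMeta) {
        if (generatorMeta == null)
            return Specialization.GENERIC;
        String g = generatorMeta.toLowerCase();
        if (g.contains("javadoc"))   return Specialization.JAVADOC;
        if (g.contains("mediawiki")) return Specialization.MEDIAWIKI;
        if (g.contains("phpbb"))     return Specialization.PHPBB;
        return Specialization.GENERIC;
    }
}
```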

Anchor Text

The converting process also supports supplementing the data with external information, such as anchor texts. This is done automatically if atags.parquet is available in the data/ directory, from where it can be downloaded.
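The supplementation step can be sketched as a lookup of externally sourced anchor texts by URL, merged into a document's own keywords. The in-memory map below stands in for the atags.parquet data, and all names are assumptions, not the actual implementation.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

// Minimal sketch of enriching a document's keywords with anchor texts
// keyed by URL, as an external file like atags.parquet might provide.
public class AnchorTextEnricher {
    static List<String> enrich(String url,
                               List<String> keywords,
                               Map<String, List<String>> anchorTexts) {
        var merged = new ArrayList<>(keywords);
        merged.addAll(anchorTexts.getOrDefault(url, List.of()));
        return merged;
    }
}
```

Documents without any external anchor text are unaffected, since the lookup falls back to an empty list.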

The rationale for doing this as well as the details of how the file is generated is described in this blog post: https://www.marginalia.nu/log/93_atags/

Central Classes

See Also