
Marginalia Search

This is the source code for Marginalia Search.

The aim of the project is to develop new and alternative discovery methods for the Internet. It's an experimental workshop as much as it is a public service; the overarching goal is to elevate the more human, non-commercial sides of the Internet.

A secondary goal is to do this without requiring datacenters or an enterprise hardware budget, so that the operation can run on affordable hardware with minimal operational overhead.

The long-term plan is to refine the search engine so that it provides enough public value that the project can be funded through grants, donations, and commercial API licenses (non-commercial share-alike is always free).

Set up

Start by running ⚙️ run/setup.sh. This will download the supplementary model data that is necessary to run the code; the same data is also needed to run the tests.
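
For reference, from the repository root the invocation looks something like this (a minimal sketch, assuming a POSIX shell):

    # Download the supplementary model data (also needed for the tests)
    $ run/setup.sh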

To set up a local test environment, follow the instructions in 📄 run/readme.md!

Hardware Requirements

A production-like environment requires at least 128 GB of RAM and ideally 2 TB+ of enterprise-grade SSD storage, as well as some additional terabytes of slower hard drives for storing crawl data. It can be made to run on smaller hardware by limiting the size of the index.

A local developer's deployment is possible with much smaller hardware (and index size).

Project Structure

📁 code/ - The Source Code. See 📄 code/readme.md for a further breakdown of the structure and architecture.

📁 run/ - Scripts and files used to run the search engine locally

📁 third-party/ - Third party code

📁 doc/ - Supplementary documentation

📄 CONTRIBUTING.md - How to contribute

📄 LICENSE.md - License terms

Contact

You can email kontakt@marginalia.nu with any questions or feedback.

License

The bulk of the project is available under the AGPL 3.0 license, with exceptions. Some parts are co-licensed under MIT, and third-party code may have different licenses. See the appropriate readme.md / license.md.

Versioning

The project uses a modified form of Calendar Versioning, where the first two pairs of digits are the year and month of the latest crawling operation, and the third pair is a patch number.

            version
           --
     yy.mm.VV
     -----
     crawl

For example, 23.03.02 is a release with crawl data from March 2023 (released in May 2023). It is the second patch for the 23.03 release.
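
To make the scheme concrete, the sketch below (assuming bash; the version string is just an example) splits a version into its crawl date and patch number:

    # Illustrative only: unpack yy.mm.VV into crawl date and patch number.
    version="23.03.02"
    IFS=. read -r yy mm patch <<< "$version"
    echo "crawl data: 20$yy-$mm, patch $patch"   # prints: crawl data: 2023-03, patch 02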

Versions with the same year and month are compatible with each other, or at least offer an upgrade path where the same data set can be used. Across different crawl sets, data format changes may be introduced, and you're generally expected to re-crawl the data from scratch: crawl data has a shelf life roughly as long as this project's major release cycles, and after about 2-3 months it gets noticeably stale, with many dead links.

For development purposes, crawling is discouraged; sample data is available instead. See 📄 run/readme.md for more information.

Funding

Donations

Consider donating to the project.

Grants

This project was funded through the NGI0 Entrust Fund, a fund established by NLnet with financial support from the European Commission's Next Generation Internet programme, under the aegis of DG Communications Networks, Content and Technology under grant agreement No 101069594.
