Installation (running your own crawlers)
- Make a copy of the example directory.
- Add your own crawler YAML configurations into the config directory.
- Add your Python extensions into the src directory (if applicable).
- Update setup.py with the name of your project and any additional dependencies.
- If you need to (e.g. if your database connection or directory structure is different), update any environment variables in the docker-compose.yml, although the defaults should work fine (see the sketch after these steps).
- Run docker-compose up -d. This might take a while when it's building for the first time.
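If you do override anything, the variables go under the relevant service's environment key in docker-compose.yml. A minimal sketch, assuming a worker service like the one used in the commands below (paths and values here are illustrative only; the variables themselves are documented in the configuration list further down):

services:
  worker:
    environment:
      MEMORIOUS_CONFIG_PATH: /crawlers/config    # where your crawler YAML files live inside the container
      MEMORIOUS_DEBUG: "false"
      ARCHIVE_TYPE: file
      ARCHIVE_PATH: /data/archive
      REDIS_URL: redis://redis:6379/0            # assumes a redis service on the default port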
Run a crawler
- You can access the Memorious CLI through the worker container:
docker-compose run --rm worker /bin/sh
To see the crawlers available to you:
memorious list
And to run a crawler:
memorious run my_crawler
See Usage (or run memorious --help) for the complete list of Memorious commands.
Check worker logs
- You can check the worker logs:
docker-compose logs worker -f
Access Memorious UI
- The Memorious UI is available at http://localhost:8000/
Note: you can use any directory structure you like; the src and config directories are not required, and neither is the separation of YAML and Python files. As long as the MEMORIOUS_CONFIG_PATH environment variable points to a directory containing your YAML files, at any level of directory nesting, Memorious will find them.
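For orientation, each YAML file describes one crawler as a pipeline of stages wired together by handle rules. A minimal sketch, assuming the built-in seed, fetch and directory operations (the crawler name, seed URL and output path are placeholders):

name: my_crawler                  # placeholder name
description: Example crawler
schedule: weekly
pipeline:
  init:
    method: seed
    params:
      urls:
        - https://example.org/    # placeholder seed URL
    handle:
      pass: fetch
  fetch:
    method: fetch
    handle:
      pass: store
  store:
    method: directory
    params:
      path: /data/results         # placeholder output path

Real crawlers usually add a parse stage between fetch and store to extract further links and decide which responses to keep.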
Your Memorious instance is configured by a set of environment variables that control database connectivity and general principles of how the system operates. You can set all of these in the docker-compose.yml.
- MEMORIOUS_CONFIG_PATH: a path to crawler pipeline YAML configurations.
- MEMORIOUS_DEBUG: whether to go into a simple mode with task threading disabled. Defaults to false.
- MEMORIOUS_INCREMENTAL: execute each part of a crawler only once per interval. Defaults to true.
- MEMORIOUS_EXPIRE: how many days until cached crawled data expires. Defaults to 1 day.
- MEMORIOUS_DB_RATE_LIMIT: maximum number of database inserts per minute. Defaults to 6000.
- MEMORIOUS_HTTP_RATE_LIMIT: maximum number of HTTP calls to a host per minute. Defaults to 120.
- MEMORIOUS_SCHEDULER_INTERVAL: how many seconds to wait between checks for scheduled crawlers that are due to run. Defaults to 60.
- MEMORIOUS_HTTP_CACHE: whether to cache HTTP requests.
- MEMORIOUS_USER_AGENT: custom User-Agent string for Memorious.
- MEMORIOUS_DATASTORE_URI: connection path for an operational database (which crawlers can send data to using the db method). Defaults to a local SQLite database.
- MEMORIOUS_MAX_SCHEDULED: maximum number of scheduled crawlers to run at the same time.
- WORKER_THREADS: how many threads to use for execution.
- REDIS_URL: address of the Redis instance to use for crawler logs (uses a temporary FakeRedis if missing).
- ARCHIVE_TYPE: either file (the local file system is used for storage), s3 (Amazon S3 is used) or gs (Google Cloud Storage is used); a combined example follows this list.
- ARCHIVE_PATH: local directory to use for storage if ARCHIVE_TYPE is file.
- ARCHIVE_BUCKET: bucket name if ARCHIVE_TYPE is s3 or gs.
- AWS_ACCESS_KEY_ID: AWS Access Key ID. (Only needed if ARCHIVE_TYPE is s3.)
- AWS_SECRET_ACCESS_KEY: AWS Secret Access Key. (Only needed if ARCHIVE_TYPE is s3.)
- AWS_REGION: a regional AWS endpoint. (Only needed if ARCHIVE_TYPE is s3.)
- ALEPH_HOST: the Aleph instance used by the upload operation. Defaults to https://data.occrp.org/, but any instance of Aleph 2.0 or greater should work.
- ALEPH_API_KEY: a valid API key for use by the upload operation.
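As an example of how the archive and Aleph settings combine, storing crawled files in Amazon S3 and uploading to an Aleph instance might look like this in the worker's environment (the bucket, credentials, region and host below are placeholders, not defaults):

environment:
  ARCHIVE_TYPE: s3
  ARCHIVE_BUCKET: my-crawl-archive          # placeholder bucket name
  AWS_ACCESS_KEY_ID: your-access-key-id     # placeholder credentials
  AWS_SECRET_ACCESS_KEY: your-secret-key
  AWS_REGION: eu-west-1                     # placeholder region
  ALEPH_HOST: https://aleph.example.org/    # placeholder Aleph instance
  ALEPH_API_KEY: your-aleph-api-key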
Shut it down
To gracefully exit, run:
docker-compose down
Files downloaded by the crawlers you ran, Memorious progress data from the Redis database, and the Redis task queue are all persisted in the build directory and will be reused the next time you start it up. (If you need a completely fresh start, you can delete this directory.)
Building a crawler
Crawler Development mode
When you’re working on your crawlers, it’s not convenient to rebuild your Docker containers all the time. To run without Docker:
- Copy the environment variables from the docker-compose.yml into env.sh and source it. Make sure MEMORIOUS_CONFIG_PATH points to your crawler YAML files, wherever they may be.
- Run pip install memorious. If your crawlers use Python extensions, you'll need to run pip install in your crawlers directory as well.
- Use memorious list to list your crawlers and memorious run your-crawler to run a crawler.
Note: In development mode Memorious uses a single-threaded worker (because FakeRedis is single-threaded), so task execution concurrency is limited and the worker executes the stages of a crawler's pipeline one after another.