# Hometown

A supported fork of Mastodon that provides local posting and a wider range of content types.

This repository is a copy of coop-cloud/mastodon with a fresh README and some Hometown-specific configuration. Keeping the deployments separate seems sensible, since the two apps may diverge in their deployment or configuration instructions at some point, even though the aim is to stay as close to mainline Mastodon as possible.

* **Category**: Apps
* **Status**: 1
* **Image**: `decentral1se/hometown`
* **Healthcheck**: No
* **Backups**: No
* **Email**: Yes
* **Tests**: No
* **SSO**: Yes

## Basic usage

Hometown expects secrets to be formatted in a very specific way, so please choose "No" when prompted to generate secrets for `abra app new mastodon`. The secrets must instead be generated outside of abra, which is what step 2 below does. See the recipe's abra.sh for the details; a rough sketch of what such a helper looks like follows the steps.

1. `abra app new mastodon`
2. `abra app cmd <domain> secrets --local`
3. `abra app config <domain>`
4. `abra app deploy <domain>`
5. `abra app cmd <domain> setup`
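
This is only a hedged sketch, not the recipe's actual abra.sh: the function name, secret names and the upstream image are assumptions. The general shape, though, is "generate Mastodon-compatible values locally, then store them with `abra app secret insert`":

```
# Hypothetical sketch of a "secrets" helper -- NOT the recipe's real code.
secrets() {
  # Rails-style secrets generated with the upstream rake task (image name is an assumption)
  secret_key_base=$(docker run --rm tootsuite/mastodon:latest bundle exec rake secret)
  otp_secret=$(docker run --rm tootsuite/mastodon:latest bundle exec rake secret)

  # Random database password
  db_password=$(openssl rand -hex 32)

  # Store each value in abra's secret store (secret names are assumptions)
  abra app secret insert "$APP_NAME" secret_key_base v1 "$secret_key_base"
  abra app secret insert "$APP_NAME" otp_secret v1 "$otp_secret"
  abra app secret insert "$APP_NAME" db_password v1 "$db_password"
}
```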

Then, on your host (outside of the containers), you'll need to fix permissions for the volume (see #2):

```
chown -R 991:991 /var/lib/docker/volumes/<domain>_app/_data
```
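
991 is the UID/GID of the `mastodon` user inside the official Mastodon images, which is why the volume needs to be handed over to it. To confirm the change took effect, something like this should show `991 991` as owner and group:

```
# Numeric owner/group of the volume directory itself
ls -lnd /var/lib/docker/volumes/<domain>_app/_data
```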

And finally, within the app container, create an admin account:

```
abra app cmd <domain> admin "<username>" "<email>"
```
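
The `admin` helper is presumably a thin wrapper around Mastodon's own `tootctl`; if you ever need to do the same thing by hand inside the app container, the upstream command looks roughly like this (the `--role` value is an assumption about which role you want, and older Mastodon releases spell the roles differently):

```
# Inside the app container: create a confirmed account and give it the Owner role
tootctl accounts create "<username>" --email "<email>" --confirmed --role Owner
```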

## Tips & Tricks

### Auto-complete is not working?

Check the Sidekiq dashboard (`/sidekiq/retries`): are a bunch of jobs failing? What is the error?

If it looks anything like `blocked by: [FORBIDDEN/12/index read-only / allow delete (api)];`, then your Elasticsearch service has probably put itself into a "read-only" state. This usually happens after the host runs close to zero free disk space. Elasticsearch does not undo this state by itself, even once disk space is available again, so you need to clear it manually:

```
abra app run <domain> es bash
curl -XPUT -H "Content-Type: application/json" http://localhost:9200/_all/_settings -d '{"index.blocks.read_only_allow_delete": null}'
```

Then head back to the Sidekiq retries panel and retry one job. You should see the retry count go down by one if it passed. Then you can "Retry All" and the remaining jobs should get scheduled and run.
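
If you want to double-check that the block is really gone before retrying, you can query the index settings again from inside the `es` container (standard Elasticsearch API); after clearing, no index should report the setting any more:

```
# After clearing, no index should list the read-only block any more
curl -s 'http://localhost:9200/_all/_settings?pretty' | grep read_only_allow_delete || echo "block cleared"
```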