monitoring-lite

A centralised grafana/prometheus/loki stack. This is an alternative approach to coop-cloud/monitoring which does not include any of the services that actually gather metrics and/or logs. Instead, this is a useful recipe for folks who need to centralise their monitoring stack into a single grafana/prometheus/loki install plus several instances of node_exporter/cadvisor/promtail.

  • Category: Apps
  • Status: 2, beta
  • Image: grafana/grafana, 4, upstream
  • Healthcheck: 3
  • Backups: 1
  • Email: 3
  • Tests: No
  • SSO: 1

Setup Metrics Gathering

Where gathering.org is the node you want to gather metrics from.

  1. Configure DNS:
  • monitoring.gathering.org
  • cadvisor.monitoring.gathering.org
  • node.monitoring.gathering.org
  2. Configure Traefik to use BasicAuth:
  • Run abra app config traefik.gathering.org and uncomment/set:

    ```
    # BASIC_AUTH
    COMPOSE_FILE="$COMPOSE_FILE:compose.basicauth.yml"
    BASIC_AUTH=1
    SECRET_USERSFILE_VERSION=v1
    ```

  • Generate a userslist entry with an htpasswd-hashed password and insert it as a secret (see the sketch after this list): abra app secret insert traefik.gathering.org userslist v1 'admin:hashed-secret'. Make sure there is no whitespace inside admin:hashed-secret, it seems to break stuff...
  • abra app deploy traefik.gathering.org (you might need to undeploy it first)
  3. abra app new monitoring-ng
  4. abra app config monitoring.gathering.org; for a gathering-only node, this is all that is required: COMPOSE_FILE="$COMPOSE_FILE:compose.metrics.yml"
  5. abra app deploy monitoring.gathering.org
  6. Check that the endpoints are up and that basic-auth works:
  • cadvisor.monitoring.gathering.org
  • node.monitoring.gathering.org
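
Since the userslist value is a standard htpasswd user:hash pair, one way to produce it is sketched below, assuming the htpasswd tool from the apache2-utils package is available (any htpasswd-compatible generator works):

```
# print an "admin:<hash>" line; the hash part replaces hashed-secret above
htpasswd -nb admin 'my-strong-password'

# then insert it, with no whitespace inside the quoted value (the hash shown here is a placeholder)
abra app secret insert traefik.gathering.org userslist v1 'admin:$apr1$placeholder'
```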

Setup Metrics Browser

  1. Configure DNS:
    • monitoring.example.org
    • prometheus.monitoring.example.org
    • loki.monitoring.example.org
  2. Insert secrets for prometheus
  3. Add a scrape config (see scrape-config.example.yml and the sketch after this list) and run abra app cp to copy it into the prometheus container:

    ```
    cp scrape-config.example.yml gathering.org.yml
    # adjust domain
    # mkdir scrape_configs
    abra app cp monitoring.dev.local-it.cloud gathering.org.yml prometheus:/prometheus/scrape_configs/
    ```

  4. Insert the grafana SSO secret
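
The format of files under /prometheus/scrape_configs is set by the recipe's scrape-config.example.yml. Purely as an illustration, assuming standard Prometheus scrape_config entries pointed at the basic-auth-protected endpoints from the gathering setup (job names and credentials here are made up), a gathering.org.yml could look roughly like this:

```yaml
# gathering.org.yml -- illustrative sketch only; follow scrape-config.example.yml for the real format
- job_name: gathering-org-node
  scheme: https
  basic_auth:
    username: admin
    password: the-plaintext-basic-auth-password   # must match the userslist hash on the gathering node
  static_configs:
    - targets: ['node.monitoring.gathering.org']

- job_name: gathering-org-cadvisor
  scheme: https
  basic_auth:
    username: admin
    password: the-plaintext-basic-auth-password
  static_configs:
    - targets: ['cadvisor.monitoring.gathering.org']
```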
Service         Auth                 Domain
Grafana         Email / SSO          monitoring.example.org
Prometheus      traefik basic-auth   prometheus.monitoring.example.org
Loki            traefik basic-auth   loki.monitoring.example.org
Cadvisor        traefik basic-auth   cadvisor.monitoring.example.org
Node Exporter   traefik basic-auth   node.monitoring.example.org

TODO

  • metrics.compose.yml -> compose.yml
  • Loki
    • s3 aws secret?
  • Promtail
  • Loki -> Grafana Datasource
  • prometheus retention!
  • traefik metrics
  • uptime-kuma, dashboard
  • authentik metrics?
  • cool alerts
  • note: all gathering nodes will have the same htpasswd basic-auth secret ... -> this could be a use case to actually use docker swarm ... could then use swarm service discovery in prometheus (sketched below) -> multiple scrape_configs in the prometheus service -> oauth / header? prometheus could do it, does promtail? does traefik?
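
For the swarm service discovery idea in that last note, Prometheus has a built-in dockerswarm_sd mechanism. A minimal sketch, assuming prometheus can reach a swarm manager's docker socket and that services opt in via an illustrative prometheus.scrape=true label (neither of which this recipe defines yet):

```yaml
scrape_configs:
  - job_name: swarm-tasks
    dockerswarm_sd_configs:
      - host: unix:///var/run/docker.sock   # a swarm manager's docker socket
        role: tasks
    relabel_configs:
      # keep only tasks whose service sets the (illustrative) label prometheus.scrape=true
      - source_labels: [__meta_dockerswarm_service_label_prometheus_scrape]
        regex: "true"
        action: keep
```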

This stack requires 3 domains, one each for grafana, prometheus and loki. This is due to the need for the gathering tools, such as node_exporter, to have a publicly accessible URL for making connections. We make use of the internal prometheus HTTP basic auth & wire up an Nginx proxy with HTTP basic auth for loki. Grafana uses Keycloak OpenID Connect sign-in. The alertmanager setup remains internal and is only connected to grafana. It also assumes that you are deploying the coop-cloud/gathering recipe on the machines that you want to gather metrics & logs from. Each instance of the gathering recipe will report back and/or be scraped by your central install of monitoring-lite.

Post-setup guide

  • configure prometheus/loki/alertmanager as data sources in grafana under Configuration > Data sources

    • for loki, you need to set a "Custom HTTP Header": X-Scope-OrgID: fake (a provisioned equivalent is sketched after this list)
  • configure the SMTP mailer under Alerting > Contact points

    • edit the default contact point, choose "Alertmanager" as type & http://alertmanager:9093 as URL
    • use the "Test" button to send a test mail. It should fire a request at the alertmanager & that should send a mail
  • abra app cp your scrape_configs: ... into /prometheus/scrape_configs & log into your prometheus web UI to ensure they're working

  • load your dashboards in manually under Create > Dashboard

  • from your dashboard panels, choose Edit > Alert to create alerts based on those panels
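
If you would rather provision the loki data source than click it together, the same X-Scope-OrgID header can be expressed in Grafana's datasource provisioning format. A minimal sketch, assuming loki is reachable in-stack at http://loki:3100 (the recipe's grafana-datasources.yml is the authoritative version):

```yaml
apiVersion: 1
datasources:
  - name: Loki
    type: loki
    access: proxy
    url: http://loki:3100            # assumed in-stack service name and default loki port
    jsonData:
      httpHeaderName1: "X-Scope-OrgID"
    secureJsonData:
      httpHeaderValue1: "fake"
```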

THX to the previous work of @decentral1se @knooflok @3wc @cellarspoon @mirsal