Support metadata or full data backup + documentation
@ -20,3 +20,6 @@ BLOCK_SIZE=1MiB # only increase if there is a fast network connection between no

# Use a directory on the host instead of a docker volume for storage
#LOCAL_FOLDER_META=/path/on/docker/host
#LOCAL_FOLDER_DATA=/path/on/docker/host

## Enable Full Data Backups (not just metadata)
# COMPOSE_FILE="$COMPOSE_FILE:compose-fullbackup.yml"
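
For example, to keep Garage's storage in host directories instead of Docker volumes, uncomment both variables and point them at existing directories on the Docker host (the paths below are only illustrative):

```
LOCAL_FOLDER_META=/srv/garage/meta   # illustrative host path for the metadata directory
LOCAL_FOLDER_DATA=/srv/garage/data   # illustrative host path for the data directory
```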

README.md
@ -18,39 +18,44 @@

## Quick start

* `abra app new garage`
* If you don't already have an RPC secret for your Garage cluster, generate one: `abra app secret generate --all`
  * Note: in older versions of abra you must generate the secret locally with `openssl rand -hex 32` and then insert the result as described below
* If this Garage node is joining a cluster with an existing RPC secret, insert it: `abra app secret insert <app-domain> rpc_secret v1 <rpc-secret>` (see the example after this list)

> Note: all nodes must share the same RPC secret; do not lose this value if you plan to cluster Garage!

* `abra app config <app-domain>`
* `abra app deploy <app-domain>`
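
If you are on an older abra and need the manual flow, it looks like this (`<app-domain>` and `<rpc-secret>` are placeholders; the secret value is whatever `openssl` prints):

```
# generate a 32-byte hex secret locally
openssl rand -hex 32

# store it as version v1 of the app's rpc_secret
abra app secret insert <app-domain> rpc_secret v1 <rpc-secret>
```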

## Configuration

### Allow RPC Connections

* Your ingress controller must be set up to allow connections on port 3901. We assume you're using Traefik.
* `abra app configure <traefik-app-name>`
* Uncomment the block that starts with `## Garage` (a conceptual sketch follows this list)
* Re-deploy Traefik: `abra app undeploy -n <traefik-app-name> && sleep 5 && abra app deploy -n <traefik-app-name>`
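
The exact contents of the `## Garage` block are defined by the Traefik recipe itself, so prefer uncommenting that block over hand-writing configuration. Conceptually, it opens a TCP entrypoint for Garage's RPC port; a minimal sketch of such an entrypoint in Traefik's static configuration (the entrypoint name here is an assumption, not the recipe's actual value):

```
entryPoints:
  garage-rpc:        # assumed name; the recipe's block may differ
    address: ":3901" # Garage's RPC port
```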

### Prepare the Garage Client

Start by creating an alias for the abra run command:

```
alias garage="abra app run <app domain> -- app /garage"
```

Run `garage status` to verify everything is working.

### Garage Quick Start Guide

Once `garage status` works, you can follow the guide here: https://garagehq.deuxfleurs.fr/documentation/quick-start/#checking-that-garage-runs-correctly

Terms you will encounter when assigning roles (see the sketch after this list):

* `node id` (required) - Node identifier supplied by the garage CLI; it can be found by running `garage node id`.
* `zone` (required) - Identifier for how nodes are grouped. A zone usually refers to a geographical location (us-east, paris-1, etc.); no specific syntax is required, so zones can be called anything.
* `capacity` (required) - Disk space the node will allocate to the cluster; use T and G for units (terabytes and gigabytes respectively).
* `tag` (optional) - Additional notes appended to `garage status`, usually a title for the node.

> Note: the role assignment command conflicts with `abra app run`'s `-t` option.\
> Connecting is not currently implemented.
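
When you reach the layout step in the guide, those terms map onto the `garage layout` commands roughly like this (the zone, capacity and tag values below are just examples):

```
# stage a role for the node (run `garage node id` first to get <node id>)
garage layout assign -z us-east -c 1T -t node1 <node id>

# review the staged layout, then apply it (increment the version on each apply)
garage layout show
garage layout apply --version 1
```

Because of the `-t` conflict noted above, the tag option may not pass through the `abra app run` alias cleanly; if so, assign the role without a tag (it is optional).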

## Backups

> In development, not currently reliable

By default, backups will only capture a snapshot of the metadata directory, which includes bucket names, hashed secrets, and other related information. The actual data will not be backed up!

If you're running Garage in a cluster, when you restore the metadata, other nodes will send the new node any missing data.
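
Concretely, these are the two directories involved (the paths are the ones referenced by the backup labels in `compose-fullbackup.yml` below):

```
/var/lib/garage/meta   # metadata snapshot: backed up by default
/var/lib/garage/data   # object data: only backed up when full data backups are enabled
```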

### To enable full data backups

* `abra app config <app domain>`
* Uncomment the block that starts with `## Enable Full Data Backups` (shown after this list)
* Re-deploy Garage: `abra app undeploy -n <app domain> && sleep 5 && abra app deploy -n <app domain>`
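
After uncommenting, the relevant lines of the app's env file should look like this (referencing the full-backup override file added in this commit):

```
## Enable Full Data Backups (not just metadata)
COMPOSE_FILE="$COMPOSE_FILE:compose-fullbackup.yml"
```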

Finally, please note that Abra backups are not a substitute for a proper data replication strategy; it's recommended to run Garage in a cluster if you need data redundancy.

For more, see [`garagehq.deuxfleurs.fr`](https://garagehq.deuxfleurs.fr/documentation/cookbook/real-world/).

compose-fullbackup.yml (new file)
@ -0,0 +1,9 @@
---
version: "3.8"

services:
  app:
    deploy:
      labels:
        - "backupbot.backup=true"
        - "backupbot.backup.path=/var/lib/garage/meta,/var/lib/garage/data"
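
Once the override is deployed, one way to confirm the labels took effect is to inspect the service (a sketch; replace `<stack name>` with the Swarm stack name abra created for the app):

```
docker service inspect --format '{{ json .Spec.Labels }}' <stack name>_app
```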