garage
An open-source distributed object storage service tailored for self-hosting at small-to-medium scale.
- Category: Apps
- Status: wip
- Image: garage, 4, upstream
- Healthcheck: No
- Backups: No
- Email: No
- Tests: No
- SSO: No
Quick start
abra app new garage
- Garage is particular about the rpc secret; generate it locally with this command, then insert it:
openssl rand -hex 32
abra app secret i <app-domain> rpc_secret v1 <rpc-secret>
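Garage expects the rpc secret to be exactly 32 bytes, i.e. 64 hexadecimal characters. A minimal sketch for generating the secret and sanity-checking its length before inserting it with abra:

```shell
# Generate a 32-byte RPC secret as 64 hex characters.
secret=$(openssl rand -hex 32)

# Sanity-check the length before handing it to abra.
if [ "${#secret}" -eq 64 ]; then
  echo "$secret"
else
  echo "unexpected secret length: ${#secret}" >&2
  exit 1
fi
```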
Note: all nodes must share the same rpc secret; do not lose this value if you plan to cluster Garage!
abra app config <app-domain>
abra app deploy <app-domain>
Getting started
Garage CLI
Start by creating an alias for the abra run command:
alias garage="abra app run <app-domain> app /garage"
Run garage status to verify everything is working.
Assign Roles
Terms:
- node id (required) - Node identifier supplied by the garage CLI; it can be found by running garage node id.
- zone (required) - Identifier for how nodes will be grouped. A zone usually refers to a geographical location (us-east, paris-1, etc.); no specific syntax is required, and zones can be named anything.
- capacity (required) - Disk space the node will allocate to the cluster; use T and G for units (terabytes and gigabytes respectively).
- tag (optional) - Additional notes appended to garage status, usually a title for the node.
Command layout:
garage layout assign <node-id> -z <zone> -c <capacity> -t <tags>
Example (pulled from the garage docs):
garage layout assign 563e -z par1 -c 1T -t mercury
garage layout assign 86f0 -z par1 -c 2T -t venus
garage layout assign 6814 -z lon1 -c 2T -t earth
garage layout assign 212f -z bru1 -c 1.5T -t mars
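Assigning roles only stages the changes. Per the upstream Garage documentation, the layout takes effect once you review and apply it (using the alias defined above); the version number increments with each change, so use the value suggested by garage layout show:

```
garage layout show
garage layout apply --version 1
```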
Adding & Connecting Nodes
Note: connecting is not currently implemented in this recipe.
This abra recipe does not supply garage with the public IP address of your box, so when connecting to another garage node you must supply the local node's IP.
Example, run from node 2:
garage node connect <node-1-id>@<node-1-ip>:3901
Example, run from node 1:
garage -h <node-2-ip> node connect <node-1-id>@<node-1-ip>:3901
Backups
Not currently implemented
Backups will only capture a snapshot of the metadata directory, which includes bucket names, hashed secrets, and other related information. However, they do not include the actual data!
If you're running Garage in a cluster, when you restore the metadata, other nodes will send the new node any missing data.
Finally, please note that Abra backups are not a substitute for a proper data replication strategy, and it's recommended to run Garage in a cluster if you need data redundancy.
For more, see garagehq.deuxfleurs.fr.