backup -- create: Fatal: wrong password or no key found #36
Reference: coop-cloud/backup-bot-two#36
From a freshly deployed `backup-bot-two` which I ran `app secret generate -a` on. Then I ran the create:

```
restic.errors.ResticFailedError: Restic failed with exit code 1: Fatal: wrong password or no key
```

Once I rm'd and re-created this, it worked! Issue on my end, I guess.

Would it be an idea to try and drop the huge stack traces? Is there any way to just pass back the `Fatal: wrong password or no key` part and not the whole thing?
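One way that trimming could look (a sketch, not `backup-bot-two`'s actual code; the helper name is made up) is to keep only the text from the last `Fatal:` marker onward:

```python
def short_restic_error(message: str) -> str:
    """Trim a restic error blob to its final 'Fatal: ...' part.

    Hypothetical helper: keeps everything from the last "Fatal:" marker
    onward, falling back to the blob's last line if restic emitted none.
    """
    marker = message.rfind("Fatal:")
    if marker != -1:
        return message[marker:].strip()
    return message.strip().splitlines()[-1]
```

So `short_restic_error("Restic failed with exit code 1: Fatal: wrong password or no key found")` would give back just `"Fatal: wrong password or no key found"`.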
What should I output on the `abra` side after parsing this? What is good to include? I think it would be good to know which paths, at least? I can read the volumes from the recipe config, or?

For debugging, the whole stack trace is quite important. Do you know how to reproduce this? If it's a typical user input error, we could catch it and provide a better error message on how to solve it.
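If the catch-and-report route is taken, one sketch (assuming a `DEBUG` environment variable; the function names are illustrative, not existing code) would gate the full trace behind it:

```python
import os
import sys
import traceback


def debug_enabled(env=None) -> bool:
    # Treat DEBUG=1/true/yes (any case) as "show the full stack trace".
    env = os.environ if env is None else env
    return env.get("DEBUG", "").lower() in ("1", "true", "yes")


def report_error(exc: Exception) -> None:
    # Full traceback when debugging, a single-line message otherwise.
    if debug_enabled():
        traceback.print_exception(type(exc), exc, exc.__traceback__)
    else:
        print(f"Error: {exc}", file=sys.stderr)
```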
I think `files_new`, `files_changed`, `data_added`, `total_duration` and `snapshot_id` could be useful as output.

Not sure what you mean?
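For what it's worth, if `backup-bot-two` runs restic with `--json` (an assumption on my side), restic prints newline-delimited JSON and the final `"message_type": "summary"` object carries exactly those fields, so they could be picked out roughly like this:

```python
import json

WANTED = ("files_new", "files_changed", "data_added",
          "total_duration", "snapshot_id")


def backup_summary(restic_json_output: str) -> dict:
    # restic --json emits one JSON object per line; keep the stats from
    # the last "summary" message (one per completed backup run).
    summary = {}
    for line in restic_json_output.splitlines():
        try:
            msg = json.loads(line)
        except json.JSONDecodeError:
            continue
        if msg.get("message_type") == "summary":
            summary = {key: msg.get(key) for key in WANTED}
    return summary
```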
Right, yeah, maybe I could pass `--debug` from `abra` as `-e DEBUG=true` to the `backup-bot-two`? I think it's better to have less chaotic stack traces as the default. Maybe fine for now though, no major issue. Not sure how to reproduce this, no, sorry.

From the output of `abra app backup create abra-test-recipe.foo.com` I don't know how these stats connect to which volume is defined in the recipe config. If I show a `files_new`, which volume is that related to? Is there a way to know this?

`restic` does the backup over all volumes at once and I just forward the output of `restic`. Unfortunately I don't see a way to differentiate the stats per volume. We could run a single backup for each volume, but I think this would increase the overhead, and the volumes would be split over multiple snapshots. That wouldn't be ideal.
Moved on, unsure what is actionable here, let's close.