Why are most of my backup snapshots not "restorable"?

Hi folks,

I’m trying to get my head around how backup/restore works. I’ve been taking non-continuous backups every hour to a blob store, but when I describe these backups, only one of them shows restorable=true.

For example:

Partitioned logs: false
Snapshot:  startVersion=1515130313 (maxLogEnd -0.05 days)  endVersion=1515275376 (maxLogEnd -0.05 days)  totalBytes=8046115  restorable=true  expiredPct=0.00
Snapshot:  startVersion=2663742809 (maxLogEnd -0.03 days)  endVersion=2663919783 (maxLogEnd -0.03 days)  totalBytes=8046115  restorable=false  expiredPct=0.00
Snapshot:  startVersion=3246201819 (maxLogEnd -0.03 days)  endVersion=3246322110 (maxLogEnd -0.03 days)  totalBytes=8046115  restorable=false  expiredPct=0.00
Snapshot:  startVersion=3868135355 (maxLogEnd -0.02 days)  endVersion=3868322202 (maxLogEnd -0.02 days)  totalBytes=8046115  restorable=false  expiredPct=0.00
Snapshot:  startVersion=5432053553 (maxLogEnd -0.00 days)  endVersion=5432233473 (maxLogEnd -0.00 days)  totalBytes=8090186  restorable=false  expiredPct=0.00
SnapshotBytes: 40274646
MinLogBeginVersion:      1515029301 (maxLogEnd -0.05 days)
ContiguousLogEndVersion: 1535029301 (maxLogEnd -0.05 days)
MaxLogEndVersion:        5451957755 (maxLogEnd -0.00 days)
MinRestorableVersion:    1515275376 (maxLogEnd -0.05 days)
MaxRestorableVersion:    1535029300 (maxLogEnd -0.05 days)

Each snapshot was created successively by running `fdb backup start -w`.

Does this output (specifically the MaxRestorableVersion value) mean that I can only restore data from 0.05 days ago, even though I successfully ran backups 0.00 days ago?
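If I’ve understood restorability correctly (this is my working assumption, not something I’ve confirmed in the docs), a snapshot only counts as restorable when the contiguous mutation-log range covers its whole version span. Plugging the numbers from the output above into a quick check:

```python
# Sanity check (my assumption): a snapshot shows restorable=true only when the
# contiguous mutation-log range covers its whole [startVersion, endVersion] span.
snapshots = [  # (startVersion, endVersion) from the describe output above
    (1515130313, 1515275376),
    (2663742809, 2663919783),
    (3246201819, 3246322110),
    (3868135355, 3868322202),
    (5432053553, 5432233473),
]
MIN_LOG_BEGIN = 1515029301       # MinLogBeginVersion
CONTIGUOUS_LOG_END = 1535029301  # ContiguousLogEndVersion

for start, end in snapshots:
    covered = MIN_LOG_BEGIN <= start and end < CONTIGUOUS_LOG_END
    print(f"startVersion={start}  log-covered={covered}")
```

That reproduces the flags exactly (only the first snapshot sits inside the log window), and it would also explain why MaxRestorableVersion is ContiguousLogEndVersion - 1 and MinRestorableVersion is the first restorable snapshot’s endVersion. I’d love confirmation that this is actually the rule, though.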

Thanks!
D


Your output is from describing a successful backup, right?

For your unrestorable backups, did you try running describe with `--deep`? I found recently that if you run describe before the backup is complete, it caches the metadata (including that it is incomplete and unrestorable) and continues to report that even if you run describe again after the backup completes: Backups not "restorable" after 6.3 upgrade - #2 by amanda


Thanks for the reply @amanda 🙂

I believe the backups I’m describing are successful - each one terminates with `Submitted and now waiting for the backup on tag 'hourly' to complete.` and an exit code of 0.

I have tried running the describe command with --deep, but that doesn’t alter the output, in my case.

Here’s a particularly hard-to-understand example:

Restorable: true
Partitioned logs: false
Snapshot:  startVersion=4468767014 (maxLogEnd -13.48 days)  endVersion=4468909129 (maxLogEnd -13.48 days)  totalBytes=8046115  restorable=false  expiredPct=0.00
Snapshot:  startVersion=90867210991 (maxLogEnd -12.48 days)  endVersion=90867365452 (maxLogEnd -12.48 days)  totalBytes=8090186  restorable=false  expiredPct=0.00
Snapshot:  startVersion=177266876034 (maxLogEnd -11.48 days)  endVersion=177267025078 (maxLogEnd -11.48 days)  totalBytes=8090186  restorable=false  expiredPct=0.00
Snapshot:  startVersion=263666804747 (maxLogEnd -10.48 days)  endVersion=263666994356 (maxLogEnd -10.48 days)  totalBytes=8090186  restorable=false  expiredPct=0.00
Snapshot:  startVersion=350066142746 (maxLogEnd -9.48 days)  endVersion=350066234841 (maxLogEnd -9.48 days)  totalBytes=8090186  restorable=false  expiredPct=0.00
Snapshot:  startVersion=436466713300 (maxLogEnd -8.48 days)  endVersion=436466865175 (maxLogEnd -8.48 days)  totalBytes=8090186  restorable=false  expiredPct=0.00
Snapshot:  startVersion=522866843097 (maxLogEnd -7.48 days)  endVersion=522866992395 (maxLogEnd -7.48 days)  totalBytes=8090184  restorable=false  expiredPct=0.00
Snapshot:  startVersion=579035571782 (maxLogEnd -6.83 days)  endVersion=579035725950 (maxLogEnd -6.83 days)  totalBytes=8046445  restorable=true  expiredPct=0.00
Snapshot:  startVersion=677428989698 (maxLogEnd -5.69 days)  endVersion=677429145026 (maxLogEnd -5.69 days)  totalBytes=8046445  restorable=false  expiredPct=0.00
Snapshot:  startVersion=775706995886 (maxLogEnd -4.55 days)  endVersion=775707157130 (maxLogEnd -4.55 days)  totalBytes=8046445  restorable=false  expiredPct=0.00
Snapshot:  startVersion=874122941249 (maxLogEnd -3.41 days)  endVersion=874123126886 (maxLogEnd -3.41 days)  totalBytes=8046445  restorable=false  expiredPct=0.00
Snapshot:  startVersion=972407903215 (maxLogEnd -2.28 days)  endVersion=972408076382 (maxLogEnd -2.28 days)  totalBytes=8046445  restorable=false  expiredPct=0.00
Snapshot:  startVersion=1070738028407 (maxLogEnd -1.14 days)  endVersion=1070738189282 (maxLogEnd -1.14 days)  totalBytes=8046445  restorable=false  expiredPct=0.00
Snapshot:  startVersion=1168992216442 (maxLogEnd -0.00 days)  endVersion=1168992380018 (maxLogEnd -0.00 days)  totalBytes=8046443  restorable=false  expiredPct=0.00
SnapshotBytes: 112912342
MinLogBeginVersion:      579035479545 (maxLogEnd -6.83 days)
ContiguousLogEndVersion: 579055479545 (maxLogEnd -6.83 days)
MaxLogEndVersion:        1169012144667 (maxLogEnd -0.00 days)
MinRestorableVersion:    579035725950 (maxLogEnd -6.83 days)
MaxRestorableVersion:    579055479544 (maxLogEnd -6.83 days)

My questions re the above are:

  1. How is it that the only restorable=true backup is somewhere in the middle of the history?
  2. Is the issue possibly that my database is not being heavily used (since the totalBytes doesn’t seem to be changing much)?
  3. Surely if MaxLogEndVersion is up to date, the preceding backups must have succeeded; if so, why is MaxRestorableVersion so far out of date?

Is there a way to “re-index” or “scan” the backups and refresh the backup metadata?
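In the meantime, here’s a throwaway script I hacked up to “scan” the metadata myself from the describe text (the parsing assumes the exact plain-text format I pasted above, which is not a stable interface):

```python
import re

# A few lines from the second describe output above; the regexes below assume
# this exact plain-text layout, which is an assumption, not a stable format.
describe_text = """\
Snapshot:  startVersion=522866843097 (maxLogEnd -7.48 days)  endVersion=522866992395 (maxLogEnd -7.48 days)  totalBytes=8090184  restorable=false  expiredPct=0.00
Snapshot:  startVersion=579035571782 (maxLogEnd -6.83 days)  endVersion=579035725950 (maxLogEnd -6.83 days)  totalBytes=8046445  restorable=true  expiredPct=0.00
Snapshot:  startVersion=677428989698 (maxLogEnd -5.69 days)  endVersion=677429145026 (maxLogEnd -5.69 days)  totalBytes=8046445  restorable=false  expiredPct=0.00
MinLogBeginVersion:      579035479545 (maxLogEnd -6.83 days)
ContiguousLogEndVersion: 579055479545 (maxLogEnd -6.83 days)
"""

snapshot_re = re.compile(r"startVersion=(\d+).*?endVersion=(\d+).*?restorable=(\w+)")
log_begin = int(re.search(r"MinLogBeginVersion:\s+(\d+)", describe_text).group(1))
log_end = int(re.search(r"ContiguousLogEndVersion:\s+(\d+)", describe_text).group(1))

for start, end, flag in snapshot_re.findall(describe_text):
    # Compare the reported flag against "snapshot inside the contiguous log window"
    covered = log_begin <= int(start) and int(end) < log_end
    print(f"startVersion={start}  restorable={flag}  inside-log-window={covered}")
```

For every snapshot, the reported restorable flag agrees with whether it sits inside the contiguous log window, which at least suggests the metadata itself is self-consistent, even if it isn’t what I expected.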

I’m on 6.3.16 currently - I’ll try 6.3.22 today in case that makes a difference…

D
