Fdbbackup expire with blobstore returning error, not deleting from S3

@SteavedHams after resolving the expire issue, we rebuilt the cluster and kicked off a new fdbbackup. Each night at 2:00 AM PST a job runs that expires any backup data more than 7 days old. fdbbackup describe is currently showing the last couple of days of snapshots as restorable=false.
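
For reference, the expire job boils down to something like the following. The flag spellings, timestamp format, and cluster file path are from memory of the fdbbackup help text and our environment, so treat this as a sketch of what runs rather than the verbatim script:

# Runs nightly at 2:00 AM PST; CUTOFF is computed as now minus 7 days.
# (Example value below matches the job log at the end of this post;
# exact timestamp format is approximate.)
# The timestamp option needs the cluster file to map the timestamp to a version.
CUTOFF="2019/02/13.10:00:00"
fdbbackup expire \
    -C /etc/foundationdb/fdb.cluster \
    -d 'blobstore://xxxxxxxxxxxxx@s3.us-west-2.amazonaws.com/blah?bucket=some_bucket' \
    --expire_before_timestamp "$CUTOFF"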

fdbbackup describe -d blobstore://xxxxxxxxxxxxx@s3.us-west-2.amazonaws.com/blah?bucket=some_bucket
URL: blobstore://xxxxxxxxxxxx@s3.us-west-2.amazonaws.com/blah?bucket=some_bucket
Restorable: true
Snapshot: startVersion=353447944273 (maxLogEnd -2.94 days) endVersion=353448585130 (maxLogEnd -2.94 days) totalBytes=50749583 restorable=true
Snapshot: startVersion=413355786099 (maxLogEnd -2.24 days) endVersion=413356599485 (maxLogEnd -2.24 days) totalBytes=63749211 restorable=false
Snapshot: startVersion=413900113933 (maxLogEnd -2.24 days) endVersion=499582938750 (maxLogEnd -1.25 days) totalBytes=11386890817 restorable=false
Snapshot: startVersion=499640399747 (maxLogEnd -1.25 days) endVersion=585963699169 (maxLogEnd -0.25 days) totalBytes=19205766899 restorable=false
SnapshotBytes: 30707156510
ExpiredEndVersion: 0 (maxLogEnd -7.03 days)
UnreliableEndVersion: 0 (maxLogEnd -7.03 days)
MinLogBeginVersion: 353447805204 (maxLogEnd -2.94 days)
ContiguousLogEndVersion: 353467805204 (maxLogEnd -2.94 days)
MaxLogEndVersion: 607311681429 (maxLogEnd -0.00 days)
MinRestorableVersion: 353448585130 (maxLogEnd -2.94 days)
MaxRestorableVersion: 353467805203 (maxLogEnd -2.94 days)

Running fdbbackup status shows the following.

fdbbackup status
The backup on tag `default' is restorable but continuing to blobstore://xxxxxxx@s3.us-west-2.amazonaws.com/blah?bucket=some_bucket.
Snapshot interval is 86400 seconds. Current snapshot progress target is 25.32% (>100% means the snapshot is supposed to be done)

Details:
LogBytes written - 19919852683
RangeBytes written - 35519783194
Last complete log version and timestamp - 607832819857, 02/22/19 15:59:41
Last complete snapshot version and timestamp - 585963699169, 02/22/19 09:55:11
Current Snapshot start version and timestamp - 585964113623, 02/22/19 09:55:11
Expected snapshot end version and timestamp - 672364113623, 02/23/19 09:55:12
Backup supposed to stop at next snapshot completion - No
Older Errors
1.82 day(s) ago : 'Task execution stopped due to timeout, abort, or completion by another worker' on 'file_backup_write_logs_5.2'

I am a bit confused about how to interpret restorable=false in the describe output, particularly in light of what status shows:

Last complete log version and timestamp - 607832819857, 02/22/19 15:59:41
Last complete snapshot version and timestamp - 585963699169, 02/22/19 09:55:11

Does this mean that if I were to attempt a restore, the past few days of data would be lost, as the MaxRestorableVersion suggests?

MaxRestorableVersion: 353467805203 (maxLogEnd -2.94 days)
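
For concreteness, my mental model of the restore is something like the command below (the cluster file path is ours, and my understanding, which may be wrong, is that without an explicit target version fdbrestore restores to the latest restorable version, i.e. MaxRestorableVersion):

# Hypothetical restore attempt, not yet run.
# If my reading is right, this would only bring us to version 353467805203,
# i.e. roughly 2.94 days behind the current log end.
fdbrestore start \
    --dest_cluster_file /etc/foundationdb/fdb.cluster \
    -r 'blobstore://xxxxxxxxxxxxx@s3.us-west-2.amazonaws.com/blah?bucket=some_bucket'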

If so, do you have any tips on debugging, or actions to take (e.g. wipe out the backups and start over, or should it correct itself)? I am also curious whether the first expire, which was executed before there were 7 days of backups, might have somehow affected restorability.
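
If starting over is the answer, I assume the sequence would be roughly the following (abort, delete, and start are the documented fdbbackup subcommands, but please correct the details if I have the flags wrong):

# Assumed wipe-and-restart sequence; flags approximate.
fdbbackup abort -C /etc/foundationdb/fdb.cluster     # stop the backup on tag `default'
fdbbackup delete \
    -d 'blobstore://xxxxxxxxxxxxx@s3.us-west-2.amazonaws.com/blah?bucket=some_bucket'
fdbbackup start \
    -C /etc/foundationdb/fdb.cluster \
    -d 'blobstore://xxxxxxxxxxxxx@s3.us-west-2.amazonaws.com/blah?bucket=some_bucket' \
    -s 86400                                         # keep the daily snapshot interval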

— snip job log —
info expiring data before 2019-02-13 10:00:00.778124264 +0000 UTC m=-545335.225315229
[ server ] 02-20 04:00:01 ERROR - 2019/02/20 10:00:01 info expire output is All data before version 0 is deleted.