Cluster:
FoundationDB processes - 6
Zones - 5
Machines - 5
Memory availability - 7.2 GB per process on machine with least available
Retransmissions rate - 1 Hz
Fault Tolerance - 1 machines
Server time - 10/21/24 15:10:46
Your database is very small and your restore point would need under 1 minute of mutation logs, so the restore job itself is trivial in size. That suggests the problem is that your backup_agent processes were not running consistently, were not stable, or one or more of them cannot access the backup data.
Restore jobs execute as many individual “tasks” on the backup_agent processes. If an agent cannot access the backup data, it can still claim a task in the database, then fail to read the backup data and retry periodically, eventually giving up and releasing ownership of the task so that another backup_agent (or perhaps the same one) can claim it again later.
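You can watch this from the client side with the fdbrestore status subcommand. A rough sketch (the exact output format varies by FDB version, and older releases spell the flag with underscores, i.e. --dest_cluster_file):

```shell
# Check the state of the current restore job on the destination cluster.
fdbrestore status --dest-cluster-file /etc/foundationdb/fdb.cluster

# If the job appears permanently stuck, it can be aborted and re-run once
# the backup-data accessibility problem is fixed:
# fdbrestore abort --dest-cluster-file /etc/foundationdb/fdb.cluster
```

A job that shows tasks repeatedly erroring and retrying, rather than steadily completing, is consistent with some agents being unable to read the backup data.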
A common reason for backup_agent processes being unable to access backup data is using a local filesystem path for the backup data that is not mounted on all of the hosts where backup_agent runs. Given that your screenshot shows backup data in /tmp, if you have other hosts running backup_agent processes, they cannot access this path, as it only exists on the host where you ran the fdbrestore client. Note that fdbrestore does not actually do any restore work; it only controls Restore job state and can create, monitor, or abort jobs.
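One quick way to check this is to run fdbbackup describe against the backup URL on every host that runs a backup_agent. A sketch, where the directory under /tmp is a placeholder for whatever your actual backup URL is:

```shell
# Run this on EVERY host that runs a backup_agent.
# <your-backup-dir> is a placeholder -- substitute the directory from
# your actual backup URL.
fdbbackup describe -d file:///tmp/<your-backup-dir>

# If describe fails on some hosts, either stop backup_agent on the hosts
# that cannot see the path, or move the backup data to storage every
# agent can reach (e.g. a shared NFS mount or a blobstore:// URL).
```

Any host where describe fails is a host whose agent will claim restore tasks it can never complete, which matches the stuck-and-retrying behavior described above.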