We run weekly backups of our FDB clusters. On our longest-lived clusters, there are now around 180 backups under the \xff\x02/backup-agent/tag->uid/ prefix.
On the biggest clusters, we are observing that the backup agents fail with UnableToWriteStatus and value_too_large:
<Event Severity="30"
Time="1742394753.496389"
DateTime="2025-03-19T14:32:33Z"
Type="UnableToWriteStatus"
ID="0000000000000000"
Error="value_too_large"
ErrorDescription="Value length exceeds limit"
ErrorCode="2103"
Machine="10.240.230.198:1"
ClientDescription="primary-7.1.43-18304034715893442826" />
It looks like the agent tries to write status information about all backups to a single key under \xff\x02/backupstatus/backup/json/, and fails because the value exceeds FoundationDB's 100 KB value size limit.
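For reference, FoundationDB rejects any single value larger than 100,000 bytes (error 2103, value_too_large). A quick back-of-the-envelope check, assuming the status document grows roughly linearly with the number of backups, shows why ~180 entries is enough to hit the limit:

```python
VALUE_LIMIT = 100_000  # FoundationDB's per-value size limit in bytes (error 2103 above it)
NUM_BACKUPS = 180      # roughly how many backups our oldest clusters have accumulated

# If the layer status JSON keeps one entry per backup under a single key,
# each entry may on average use at most this many bytes before the combined
# value exceeds the limit:
budget_per_entry = VALUE_LIMIT // NUM_BACKUPS
print(budget_per_entry)  # 555 bytes -- easily exceeded by a full backup status entry
```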
On the affected clusters, the status JSON no longer contains any backup information: cluster.layers.backup is set to null.
The backups themselves succeed.
The fdbbackup command does not appear to support deleting backup information from FDB, so for now we have written a script that deletes old backup entries from \xff\x02/backup-agent/tag->uid and \xff\x02/backup-agent/uid->config.
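For anyone else hitting this, a minimal sketch of the kind of cleanup we run (not our exact script). It only generates fdbcli commands rather than touching the cluster directly; the key layout is what we observed on our clusters, not a documented API, and the tag/uid values are placeholders:

```python
# Emit fdbcli commands that clear the backup-agent bookkeeping for one backup.
# Writing under \xff requires `writemode on` plus the ACCESS_SYSTEM_KEYS option.
AGENT = "\\xff\\x02/backup-agent/"

def cleanup_commands(tag: str, uid: str) -> list[str]:
    return [
        "writemode on",
        "option on ACCESS_SYSTEM_KEYS",
        # Remove the tag -> uid mapping for this backup.
        f"clear {AGENT}tag->uid/{tag}",
        # Clear everything under uid->config/<uid>; appending \xff gives an
        # end key past all subkeys of that uid.
        f"clearrange {AGENT}uid->config/{uid} {AGENT}uid->config/{uid}\\xff",
    ]

for cmd in cleanup_commands("weekly", "<uid>"):
    print(cmd)
```

We then feed the output to fdbcli; doing the same via the client bindings with the access-system-keys transaction option should work equally well.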
Is this a known issue?