Setting a hard limit on the cluster


Two quick questions:

  1. Is there a way we could impose a hard limit on the “Sum of key-value sizes” in the cluster? That is, once the sum of key-value sizes reaches a hard-coded limit, no more data could be written into the cluster.

  2. Is there a way we could know how much more data can be ingested into the cluster based on the total storage available to the cluster?


AFAIK, fdb itself does not provide such a limit; however, you can control this by limiting the size of the disk on which the storage-server data directory is placed. FDB will stop handing out read versions once the remaining disk space falls below a threshold, something like min(100 MB, 5% of the total disk).
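As a rough sketch, the threshold described above could be computed like this (the exact formula is an assumption on my part; the real value is internal to FDB and may differ):

```python
def min_free_space_bytes(total_disk_bytes: int) -> int:
    """Approximate free-space floor below which FDB stops handing out
    read versions: the smaller of 100 MB or 5% of the total disk.
    (Assumed formula, not taken from FDB source.)"""
    return min(100 * 1024**2, int(0.05 * total_disk_bytes))


# On a 10 GiB disk, 5% (~512 MiB) exceeds 100 MB, so the floor is 100 MB.
print(min_free_space_bytes(10 * 1024**3))
```

So on small disks the 5% term dominates, while on anything larger the floor is effectively a flat 100 MB.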

See this thread for more info on this.

As for your second question, you can see the amount of disk space remaining for the storage servers. This can be checked directly from the operating system, or from the fdbcli `status json` output.

By accounting for the replication factor, plus some fixed overhead (roughly 50% of the unreplicated data size), you can get a good estimate of how much more data can be written to an fdb cluster.
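A minimal sketch of that estimate, working from a parsed `status json` document. The field paths used here (`cluster.processes.<id>.disk.free_bytes` and `roles[].role`) are my understanding of the status json schema and should be verified against your cluster's actual output; the replication factor and 50% overhead are the figures from above:

```python
def estimate_remaining_capacity(status: dict,
                                replication_factor: int = 3,
                                overhead: float = 0.5) -> int:
    """Rough bytes of logical (unreplicated) key-value data that can
    still be written, given a parsed `status json` document.

    Sums free disk space across processes running a storage role, then
    divides by replication_factor * (1 + overhead). Note: if several
    storage processes share one disk, its free space is counted once
    per process, so treat the result as an upper-bound estimate.
    """
    free = 0
    for proc in status["cluster"]["processes"].values():
        if any(r.get("role") == "storage" for r in proc.get("roles", [])):
            free += proc["disk"]["free_bytes"]
    return int(free / (replication_factor * (1 + overhead)))


# Tiny synthetic status document: one storage process with 900 GB free,
# one log process (ignored). 900e9 / (3 * 1.5) = 200e9 logical bytes.
status = {"cluster": {"processes": {
    "p1": {"roles": [{"role": "storage"}],
           "disk": {"free_bytes": 900 * 10**9}},
    "p2": {"roles": [{"role": "log"}],
           "disk": {"free_bytes": 500 * 10**9}},
}}}
print(estimate_remaining_capacity(status))
```

In practice you would feed it real output, e.g. `json.loads(subprocess.check_output(["fdbcli", "--exec", "status json"]))`.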
