The currently advertised limit is ~500 fdbserver processes per cluster, as somewhere before 1,000 processes the poor cluster controller, which does all the failure monitoring, becomes overwhelmed with responses. How this translates into storage volume depends on the size of the disks attached to each fdbserver process and your performance requirements. Running more than one storage server per disk will improve disk IOPS utilization, but reduces your maximum data size by a constant factor.
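As a concrete illustration, here's a rough sketch of a foundationdb.conf fragment that runs two storage-class processes against the same physical disk; the ports, mount point, and paths are hypothetical, not a recommendation:

```ini
# Sketch only: assumes a single data disk mounted at /mnt/data1.
# Ports 4500/4501 and all paths are illustrative.
[fdbserver]
command = /usr/sbin/fdbserver
logdir  = /var/log/foundationdb

# Two storage processes sharing one disk for better IOPS utilization;
# each process needs its own datadir.
[fdbserver.4500]
class   = storage
datadir = /mnt/data1/4500

[fdbserver.4501]
class   = storage
datadir = /mnt/data1/4501
```

Each extra process on the disk divides the disk's capacity among more storage servers, which is where the constant-factor reduction in maximum data size comes from.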
Recovery time will scale with data volume, as FDB is required to load the map of key ranges to shards before accepting commits again. I suspect that pushing the total data volume in FDB to the 1PB or higher level would probably lead to recoveries lasting 10 seconds or more. Work slated for FDB 6.2 includes both changing failure monitoring to raise the process-count limit and reducing recovery times for large clusters.
I’d also be interested to hear where you’re getting your FoundationDB scaling limitation rumors from?