Here is some information I can think of, though there are likely additional reasons.
From a pure software point of view, FoundationDB has limitations in terms of scaling (doc). The most limiting factor is the ~100 TB recommended disk size: you could go bigger than that, but FDB's simulation testing hasn't covered larger sizes. Likewise, components like proxies, data distributors, and storage servers can scale out, but they too have only been tested within those limits.
Moreover, because FoundationDB has essentially no authorisation system, if the cluster credentials leak, any software holding them gains access to the whole database and thus to all the data: FDB doesn't intend to provide AuthZ/AuthN features. The closest thing to authorisation is the tenancy support added in 7.3. Tenancy is already used in production by companies, but multi-tenancy may require an additional cluster (called a metacluster) holding metadata in order to work with specific architectures. For that same reason, you'd also want your clusters to sit in isolated networks.
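To make the tenancy point concrete, here is a minimal sketch of what using tenants looks like, assuming the `tenant` commands shipped with fdbcli in recent FDB versions (the tenant names are hypothetical):

```shell
# Create two isolated key-space namespaces on the same cluster
fdbcli --exec "tenant create app1"
fdbcli --exec "tenant create app2"

# Keys written through one tenant are invisible to transactions opened
# in the other. Note this is namespacing, not authentication: anyone
# holding the cluster file can still open any tenant.
```

This illustrates why tenancy only approximates authorisation — it partitions data, but it doesn't gate who can reach the cluster in the first place.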
For those two points, you can check this thread. It's somewhat out of date, but it can give you more hints about the security and scalability limits of clusters.
Then, for clusters holding personal information, GDPR and similar laws restrict the data you can store about a person. These laws differ between regions (e.g. US vs EU), and since FoundationDB replicates data across the cluster by design, you might not want EU data ending up in a US region (and vice versa).
Finally, there are cases where you don't want the same replication factor for all data — it depends on the data's criticality. On top of that, for network latency reasons, you'd rather have a cluster close to your customers than a single global one for the whole region.
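As a sketch of why differing criticality implies separate clusters: a FoundationDB cluster has a single cluster-wide redundancy mode, set with fdbcli's `configure` command. The cluster file paths below are hypothetical:

```shell
# Critical data: triple replication on a dedicated cluster
fdbcli -C /etc/foundationdb/critical.cluster --exec "configure triple ssd"

# Less critical, cache-like data: double replication on a second cluster
fdbcli -C /etc/foundationdb/secondary.cluster --exec "configure double ssd"
```

Because the redundancy mode applies to the whole cluster rather than to individual key ranges, mixing replication requirements in practice means running separate clusters.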