Yeah, that’s possible. You would specify that by using the -k
flag multiple times, e.g.:
```
fdbdr start -s source.cluster -d destination.cluster \
    -k "range_1_start range_1_end" \
    -k "range_2_start range_2_end"
```
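To make that concrete, suppose each tenant lived under its own raw key prefix. The prefixes below are made up for illustration; each -k argument is one quoted string containing the begin key and the end key separated by a space (here the end key is just the prefix with its last byte incremented, since 0 is the byte after /):

```
fdbdr start -s source.cluster -d destination.cluster \
    -k "tenant/alpha/ tenant/alpha0" \
    -k "tenant/beta/ tenant/beta0"
```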
One caveat here is that the locking the DR process does on the destination covers the whole cluster rather than just the DR destination range, so if you wanted to DR a subspace somewhere and keep doing other things in other ranges on that destination cluster, you might run into problems. In your particular case, I think you will hit this when you move data from one FDB cluster to the other while both of these shared clusters are active.
I don’t know that locking is done at the cluster level for any deep architectural reason (I expect it’s just that the most common case is using DR to back up an entire cluster to a secondary cluster), but adding locking on a per-key-range basis would probably require a fair amount of plumbing, and performance-wise it would probably be about the same as checking a lock at the layer level.
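A layer-level lock is also pretty cheap to sketch, which is part of why I don’t think you lose much by doing it there. Something like the following (Python bindings; the lock subspace, value, and function names are all made up for illustration, not anything FDB provides) checks a per-tenant lock key in the same transaction as the write, so the check rides on FDB’s normal conflict detection:

```
import fdb

fdb.api_version(630)
db = fdb.open()

# Hypothetical layout: one lock key per tenant, kept under a metadata
# prefix of the layer's choosing.
locks = fdb.Subspace(('layer_locks',))

@fdb.transactional
def lock_tenant(tr, tenant):
    tr[locks.pack((tenant,))] = b'locked'

@fdb.transactional
def write_if_unlocked(tr, tenant, key, value):
    # The lock is read in the same transaction as the write, so this check
    # conflicts with any concurrent lock_tenant() call and the usual retry
    # machinery sorts out the ordering.
    if tr[locks.pack((tenant,))].present():
        raise RuntimeError('tenant %r is locked' % (tenant,))
    tr[key] = value
```

Once lock_tenant(db, 'alpha') has run, any write_if_unlocked call for that tenant fails until the lock key is cleared.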
If you’re interested in doing something multi-tenanted on top of FDB, then handling that in some kind of isolation layer on top of FDB is probably the correct call. I will point out that FDB itself doesn’t offer any kind of login mechanism, so if you have two tenants hitting the same cluster, nothing stops one client from reading the data associated with another client unless you do something like put a proxy layer in between that handles that. That’s the main security concern I would have. Additionally, there is no per-client load balancing done at the FDB level, so if two users share this cluster and one client saturates the database, the other client will also see degraded performance (even though it isn’t doing anything different). If you wished, you could consider this a vector for a DoS attack (in that one client can deny the other client access to the database), but I would usually just think of it as a kind of performance pathology.
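For what it’s worth, the subspace-scoping half of such a layer is small. Everything in this sketch (the 'tenant' prefix, the function names) is hypothetical, and it only isolates tenants if all access actually goes through the layer rather than straight to FDB:

```
import fdb

fdb.api_version(630)
db = fdb.open()

# Hypothetical isolation layer: clients never hand raw keys to FDB; the
# layer packs every key under that client's tenant prefix. Nothing in FDB
# enforces this -- it only helps if all access goes through the layer
# (e.g. a proxy service that is the only thing holding the cluster file).
def tenant_space(tenant_id):
    return fdb.Subspace(('tenant', tenant_id))

@fdb.transactional
def tenant_set(tr, tenant_id, key, value):
    tr[tenant_space(tenant_id).pack((key,))] = value

@fdb.transactional
def tenant_get(tr, tenant_id, key):
    val = tr[tenant_space(tenant_id).pack((key,))]
    return val if val.present() else None

tenant_set(db, 'alpha', 'greeting', b'hello')
print(tenant_get(db, 'alpha', 'greeting'))
```

Note this does nothing for the load-isolation problem; that one really does come with sharing the cluster.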
To move data around, there isn’t really a better option currently than handling that yourself, either by stopping all operations for a user while you shift their data from one cluster to another and then starting them back up once the data have been moved, or by doing something more sophisticated where you are careful about where and when you send updates and serve reads.

For example, when you move data, you could start logging all updates to the source cluster for the subspace you are moving, using versionstamped keys to maintain transaction ordering, while simultaneously copying the subspace itself over to the destination. While you do these copy reads, you also read from the list of updates and apply any that touch ranges you’ve already copied over. You keep doing this until you’ve moved everything from the source to the destination. Then you stop taking writes, copy over any lingering updates, and make the destination active. You serve reads from the source cluster until you’ve copied everything over, at which point you start reading from the destination cluster.

If you’re doing something like keeping a mapping (somewhere) from subspace to shared cluster so you know where to read, you can also keep metadata there about which ranges are locked and which ones are being moved. You have to be somewhat careful here if you cache that mapping (which is not an unreasonable thing to do), but I think it could be done.
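The versionstamp-keyed update log is the fiddliest part of that, so here is a rough sketch of just that piece (Python bindings; the 'tenant' and 'update_log' prefixes and the log’s value encoding are all hypothetical). Writes land in the tenant’s subspace as usual and are also appended to a log whose keys are ordered by commit version, which is what lets the migration process replay them in the order the source cluster committed them:

```
import fdb

fdb.api_version(630)
db = fdb.open()

# Hypothetical layout on the *source* cluster: while a tenant's subspace
# is being migrated, every write also appends an entry to an update log
# keyed by commit versionstamp.
LOG_PREFIX = ('update_log',)

@fdb.transactional
def logged_set(tr, tenant, key, value):
    # Normal write into the tenant's data subspace.
    tr[fdb.tuple.pack(('tenant', tenant, key))] = value
    # Log entry keyed by an incomplete versionstamp; FDB fills in the
    # commit version at commit time, giving a total order on entries.
    # (If you batched several mutations into one transaction, you would
    # give each entry a distinct user_version to keep the keys unique.)
    log_key = fdb.tuple.pack_with_versionstamp(
        LOG_PREFIX + (tenant, fdb.tuple.Versionstamp()))
    tr.set_versionstamped_key(log_key, fdb.tuple.pack((key, value)))

@fdb.transactional
def read_log(tr, tenant):
    # The migration process scans this range in order and applies each
    # (key, value) it hasn't applied on the destination yet.
    rng = fdb.tuple.range(LOG_PREFIX + (tenant,))
    return [fdb.tuple.unpack(v) for _, v in tr.get_range(rng.start, rng.stop)]
```

The cutover itself (draining the last log entries, flipping the mapping, redirecting reads) is left out here; that’s the part you’d drive from whatever keeps the subspace-to-cluster mapping.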