I created a test with values of two types stored in FoundationDB under different key prefixes. Objects of the first type occupy 10% of the storage space but are processed (read and updated) in 90% of transactions; objects of the second type occupy 90% of the storage space but are processed in only 10% of transactions. I run the test on a three-node cluster with the memory storage engine and double redundancy mode.
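To make the setup concrete, here is a small self-contained sketch of the key layout and access mix I use (prefix names and the 90/10 split are illustrative, not my actual test code). Since FoundationDB keeps keys sorted, every key under one prefix forms a contiguous range:

```python
import random

# Hypothetical key layout: two types under two prefixes. Because keys sort
# lexicographically, all keys under one prefix occupy a contiguous key range.
HOT_PREFIX = b"hot/"    # ~10% of the data, ~90% of the transactions
COLD_PREFIX = b"cold/"  # ~90% of the data, ~10% of the transactions

def pick_key(rng: random.Random) -> bytes:
    """Choose the key touched by one transaction, with the skewed 90/10 mix."""
    if rng.random() < 0.9:
        return HOT_PREFIX + rng.randbytes(8)   # hot object
    return COLD_PREFIX + rng.randbytes(8)      # cold object

rng = random.Random(42)
keys = [pick_key(rng) for _ in range(10_000)]
hot_share = sum(k.startswith(HOT_PREFIX) for k in keys) / len(keys)
# ~90% of traffic lands in the narrow contiguous range ["hot/", "hot0"),
# i.e. on whichever few shards currently hold that range.
print(f"hot traffic share: {hot_share:.2f}")
```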
In this test I see that load is distributed very unevenly: disk utilization per node is 42%, 2%, and 54% (taken from the fdbcli "status details" command). As I understand it, this happens because the heavily loaded values share the same key prefix and therefore fall into the same partition(s).
So the problem is that the nodes of a FoundationDB cluster get loaded unevenly when different key types (subspaces) are loaded unevenly. How can this be resolved? Everything would be fine if values of different types could be stored in separate key/value spaces, each independently partitioned across all nodes, but FoundationDB has only one such space.
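One workaround I can imagine is to "salt" the hot subspace myself, prepending a hash-derived bucket to each hot key so those keys no longer form a single contiguous range. Would something like the following (purely illustrative; bucket count and key names are made up) be the recommended approach, or is there a better mechanism?

```python
import hashlib

BUCKETS = 64  # illustrative: spread the hot subspace over 64 disjoint key ranges

def salted_hot_key(object_id: bytes) -> bytes:
    """Prefix each hot key with a stable hash bucket, so hot keys interleave
    across many shards instead of one contiguous range."""
    bucket = int.from_bytes(hashlib.sha256(object_id).digest()[:2], "big") % BUCKETS
    return b"hot/" + bucket.to_bytes(1, "big") + b"/" + object_id

# Trade-off: point reads still work (the salt is recomputable from the id),
# but scanning "all hot objects" now takes BUCKETS range reads instead of one.
print(salted_hot_key(b"order-1"))
```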