Uneven load distribution for different value types

I created a test with values of two types stored in FoundationDB under different key prefixes. Objects of the first type occupy 10% of the storage space but are processed (read and updated) in 90% of transactions; objects of the second type occupy 90% of the storage space but are processed in only 10% of transactions. I run the test on a three-node cluster with the memory storage engine and double redundancy mode.
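For reference, a minimal sketch of such a workload using the Python bindings could look roughly like the following (the subspace names, value sizes, key counts, and the exact 90/10 split are illustrative assumptions, not the actual test code):

```python
import random
import fdb

fdb.api_version(620)
db = fdb.open()

# Two subspaces with different key prefixes (hypothetical names).
hot = fdb.Subspace(('hot',))    # ~10% of the data, ~90% of the traffic
cold = fdb.Subspace(('cold',))  # ~90% of the data, ~10% of the traffic

@fdb.transactional
def read_and_update(tr, subspace, key_id):
    key = subspace.pack((key_id,))
    current = tr[key]            # read the existing value
    tr[key] = b'x' * 100         # overwrite it with a new value

def run_one_transaction():
    # Roughly 90% of transactions touch the small "hot" subspace,
    # so most of the read/write bandwidth lands on one key prefix.
    if random.random() < 0.9:
        read_and_update(db, hot, random.randrange(1000))
    else:
        read_and_update(db, cold, random.randrange(9000))
```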

I see that the load is distributed unevenly in the test: for example, disk utilization per node is 42%, 2% and 54% (taken from the fdbcli status details command). As I understand it, this happens because the heavily loaded values share the same key prefix and therefore fall into the same partition(s).

So the problem is that the nodes of a FoundationDB cluster get loaded unevenly when different key types (subspaces) receive uneven load. How can this be resolved? Everything would be fine if values of different types could be stored in separate key/value spaces, each independently partitioned across all nodes, but FoundationDB has only one such space.

Generally, our data distribution algorithm should be able to split up hot shards and distribute them evenly across the storage servers. The algorithm is better at balancing load for larger datasets, because there are more shards to distribute among the storage servers, so if the total amount of data in your database is small, that could be contributing to the problem.

You could set the following knobs, which control this algorithm, to make the bandwidth-based splitting twice as sensitive, and see if that helps.

shard_max_bytes_per_ksec=500000000
shard_min_bytes_per_ksec=50000000
shard_split_bytes_per_ksec=125000000
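If you haven't changed knobs before: they are per-process server settings, so one common way to apply them (a sketch; please check it against how you deploy your fdbserver processes) is via the [fdbserver] section of foundationdb.conf, from which fdbmonitor passes them to each process as --knob_... arguments:

```ini
; foundationdb.conf (sketch: same knob values as above, applied to every fdbserver process)
[fdbserver]
knob_shard_max_bytes_per_ksec = 500000000
knob_shard_min_bytes_per_ksec = 50000000
knob_shard_split_bytes_per_ksec = 125000000
```

After editing the file, the processes need to pick up the new arguments (fdbmonitor restarts them when their command line changes).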