Hey, I'm on FDB 6.2 and all requests go through the directory layer. I'm seeing pretty bad storage process query hotspotting due to that heavy directory layer usage: one of my storage processes is always CPU-bound, and its total_queries metric in status json is ~70x that of the other storage processes. I ran the transaction profiling analyzer and found that all of the hot reads were in the directory layer's nodeSS metadata subspace. From the data distributor internals it looks like data distribution is based solely on shard size and write rate, not read rate, which is why the entire subspace sits on one storage process. Has anyone run into this, and are there any workarounds?
One thing I've had to rule out is caching the mapping in memory. I can't rely on that alone because a remote client might delete or move directories, and I haven't found an out-of-the-box way to validate a []byte -> []string directory prefix mapping without going back through the directory layer (rough sketch of what I mean after the list below). I also can't store my own reverse mapping (for example, writing the []string directory path at the start of each directory subspace) because:
- We have a lot of query patterns that range scan the entire directory, so we can’t write inside an existing directory
- We use the default AllKeys for contentSS so it’s unsafe to store keys outside a directory
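Here's roughly what I mean by caching the mapping in memory, as a sketch with the Go bindings (the path and cache structure are just illustrative). The comment on open() is exactly the staleness problem I can't get around:

```go
package main

import (
	"fmt"
	"log"
	"strings"
	"sync"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
	"github.com/apple/foundationdb/bindings/go/src/fdb/directory"
)

// dirCache maps a directory path to its DirectorySubspace, so only the first
// open per process pays the directory layer (nodeSS) reads.
type dirCache struct {
	mu   sync.RWMutex
	dirs map[string]directory.DirectorySubspace
}

func newDirCache() *dirCache {
	return &dirCache{dirs: make(map[string]directory.DirectorySubspace)}
}

// open returns the cached subspace if we have one, otherwise resolves it
// through the directory layer once and remembers it. The catch: if another
// client moves or removes the directory, the cached prefix is silently stale,
// and I haven't found a cheap way to re-validate it without going back
// through nodeSS.
func (c *dirCache) open(db fdb.Database, path []string) (directory.DirectorySubspace, error) {
	key := strings.Join(path, "\x00")

	c.mu.RLock()
	ds, ok := c.dirs[key]
	c.mu.RUnlock()
	if ok {
		return ds, nil
	}

	ds, err := directory.CreateOrOpen(db, path, nil)
	if err != nil {
		return nil, err
	}

	c.mu.Lock()
	c.dirs[key] = ds
	c.mu.Unlock()
	return ds, nil
}

func main() {
	fdb.MustAPIVersion(620)
	db := fdb.MustOpenDefault()

	cache := newDirCache()
	ds, err := cache.open(db, []string{"app", "users"})
	if err != nil {
		log.Fatal(err)
	}
	// Subsequent requests reuse the cached prefix instead of re-reading the
	// directory layer metadata on every transaction.
	fmt.Printf("prefix for app/users: %x\n", ds.Bytes())
}
```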
Is it possible to manually move data? If so, I could split the nodeSS shard manually on my most CPU-bound clusters. Also, would anyone consider making nodeSS replication a feature of the directory layer, so that when a directory is created/moved/removed it updates all the nodeSS copies, and fetching a directory subspace randomly queries one of them?
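For context on the "split the shard manually" idea, this is how I've been checking whether nodeSS really is a single shard. It's a sketch that assumes the Go binding's LocalityGetBoundaryKeys helper and the default \xFE node subspace prefix, and my reading is that passing 0 for readVersion means "use the current version":

```go
package main

import (
	"fmt"
	"log"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

func main() {
	fdb.MustAPIVersion(620)
	db := fdb.MustOpenDefault()

	// The default directory layer keeps its node metadata under \xFE.
	nodeSS := fdb.KeyRange{
		Begin: fdb.Key("\xfe"),
		End:   fdb.Key("\xff"),
	}

	// Boundary keys show how many shards the nodeSS range is split into.
	// On my hot clusters this comes back as a single shard, which is the
	// whole problem.
	boundaries, err := db.LocalityGetBoundaryKeys(nodeSS, 1000, 0)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("nodeSS shard boundaries: %d\n", len(boundaries))
	for _, k := range boundaries {
		fmt.Printf("  %x\n", k)
	}
}
```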