Just throwing this out there, since it’s a 5.x behavior we haven’t seen before (we track event types). On a busy cluster, the master seems to be constantly moving resolution ranges. The code isn’t any different in https://github.com/apple/foundationdb/blob/fc098586a12c23771db5246602462ab8e2ef88f5/fdbserver/masterserver.actor.cpp#L955, so I’m just checking whether folks have seen this. Even the knob values are the same.
Looking at the src and dest of these events, it seems to be oscillating ranges back and forth between resolvers.
So it’s possibly because the knob value STORAGE_METRICS_AVERAGE_INTERVAL changed from 10 to 120 in 5.x. Will need to test to see if it has any effect on calming down the resolver range moves.
I finally looked into this. I don’t think you want to change STORAGE_METRICS_AVERAGE_INTERVAL, because it will also change how the storage server and data distribution track sizes.
The knob you want is MIN_BALANCE_DIFFERENCE, which controls how many bytes per second apart the resolvers need to be before a range is moved between them.
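For experimenting, knobs can be overridden per-process without rebuilding: the knob name is lowercased and prefixed with `knob_`, either as an `fdbserver` command-line flag or in `foundationdb.conf`. A minimal sketch of a conf override (the value here is purely illustrative, not a recommendation):

```ini
[fdbserver]
# Hypothetical value -- tune for your cluster. Raising it means resolvers must
# differ by more before the master will move a range between them.
knob_min_balance_difference = 10000000
```

The equivalent command-line form would be `--knob_min_balance_difference=10000000`.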
Cool, we’ll look into that. Ultimately we need to pull this metric out so that we can observe it.
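In the meantime, a quick way to observe the oscillation is to count the balancing events per src/dest pair straight from the trace logs. A sketch, assuming the event type is `MovingResolutionRange` with `Src`/`Dest` detail fields (check your own trace output; the sample line below is synthetic so the snippet is self-contained, and the trace path will differ on a real cluster):

```shell
# Synthetic stand-in for a real trace file such as
# /var/lib/foundationdb/data/*/trace.*.xml
cat > /tmp/trace.sample.xml <<'EOF'
<Event Severity="10" Time="100.0" Type="MovingResolutionRange" Machine="10.0.0.1:4500" Src="1" Dest="2"/>
<Event Severity="10" Time="101.0" Type="MovingResolutionRange" Machine="10.0.0.1:4500" Src="2" Dest="1"/>
EOF

# Count moves per (src, dest) pair; an oscillation shows up as mirrored pairs
# (e.g. 1->2 and 2->1) with similar counts.
grep 'Type="MovingResolutionRange"' /tmp/trace.sample.xml \
  | grep -o 'Src="[0-9]*" Dest="[0-9]*"' \
  | sort | uniq -c
```

If the mirrored pairs dominate, that would support the theory that ranges are just ping-ponging rather than converging.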