I’ve been reading the resolver code to understand how it works, and I noticed that the resolver receives the mutations for each transaction but only appears to use them to calculate data sizes; it never operates on the mutation data itself. That makes sense, given that AFAIK only the read/write conflict ranges are relevant for conflict detection.
If that is correct, it seems like a bandwidth/CPU-saving optimization for write-heavy workloads would be to send only the total size of the mutations in each transaction (or the size of each MutationRef) to the resolver, rather than the mutations themselves.
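To make the idea concrete, here's a rough sketch of what I'm imagining, using hypothetical, simplified stand-in types (these are not the actual FoundationDB request structs, and the real resolver request obviously carries more fields than this):

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical, simplified stand-ins for the real types.
struct KeyRange { std::string begin, end; };
struct MutationRef { uint8_t type; std::string param1, param2; };

// Conceptually, what gets sent today: full mutations plus conflict ranges,
// even though the resolver only sums the mutations' sizes.
struct ResolveRequestToday {
    std::vector<KeyRange> readConflictRanges;
    std::vector<KeyRange> writeConflictRanges;
    std::vector<MutationRef> mutations;
};

// Proposed: replace the mutation payloads with just their sizes.
struct ResolveRequestProposed {
    std::vector<KeyRange> readConflictRanges;
    std::vector<KeyRange> writeConflictRanges;
    std::vector<uint32_t> mutationSizes;   // one entry per MutationRef
    // or even just: uint64_t totalMutationBytes;
};
```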
I was just wondering about this and have only been looking into it for about half an hour, so I could easily be wrong.