I don’t know much about increasing the number of resolvers, so I’ll let someone else answer that.
But if the resolver is the only CPU-bound process and all other processes are doing fine, it’s probably worth reviewing the data model. Check that you are not generating too many conflict ranges.
Many point gets or sets produce many conflict ranges, and the resolvers then have to do that many range comparisons. If you can instead find the smallest range covering those keys and add that single range as the conflict range, the resolver has far less work to do (sketched below).
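For illustration, here’s a minimal sketch of that pattern using the Python bindings: the `next_write_no_write_conflict_range` transaction option suppresses the automatic per-write conflict range, and `add_write_conflict_range` registers one covering range in its place. The `set_many` helper, prefix, and items are made up for the example.

```python
import fdb

fdb.api_version(710)

@fdb.transactional
def set_many(tr, prefix, items):
    # Each set would normally add its own point write conflict range;
    # this option suppresses that for the next write only.
    for name, value in items.items():
        tr.options.set_next_write_no_write_conflict_range()
        tr[prefix + name] = value
    # Register one range covering every key we wrote, so the resolver
    # compares a single range instead of len(items) point ranges.
    tr.add_write_conflict_range(prefix, prefix + b'\xff')
```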
Document Layer inserts are a good example here. The Document Layer stores each JSON field in a separate FDB key under a shared prefix. For example, say I have a document in the `employee` collection that looks like:
```json
{
  "_id": 345,
  "name": "Eric",
  "marks": 90,
  "grade": "A"
}
```
Note that `_id` is the primary key, so the document is keyed by `_id`. This document would be stored under 4 different keys:
```text
employee:345:      -> _
employee:345:name  -> Eric
employee:345:marks -> 90
employee:345:grade -> A
```
Inserting these keys would normally create 4 write conflict ranges, one per key. If the document has embedded documents with deep arrays, this gets much worse, and it keeps the resolver very busy. To avoid this, the Document Layer explicitly adds a single write conflict range covering all the keys, in this case the range for the prefix `employee:345:`. That cuts the resolver’s work by 4x here, and depending on document size it can make or break performance.
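This is not how the Document Layer is actually implemented (it’s C++/Flow and tuple-encodes its keys), but here is a rough sketch of the same insert pattern for the employee example, with a made-up key encoding:

```python
import fdb

fdb.api_version(710)
db = fdb.open()

@fdb.transactional
def insert_document(tr, collection, doc):
    prefix = ('%s:%d:' % (collection, doc['_id'])).encode()  # e.g. employee:345:
    tr.options.set_next_write_no_write_conflict_range()
    tr[prefix] = b'_'  # the bare-prefix marker key
    for field, value in doc.items():
        if field == '_id':
            continue
        tr.options.set_next_write_no_write_conflict_range()
        tr[prefix + field.encode()] = str(value).encode()
    # One write conflict range for the whole document instead of four.
    tr.add_write_conflict_range(prefix, prefix + b'\xff')

insert_document(db, 'employee', {'_id': 345, 'name': 'Eric', 'marks': 90, 'grade': 'A'})
```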
If you’ve already made sure that’s not the case, just ignore this.