The RedwoodMetrics trace events will help answer this question; if you can make them available to me, I should be able to tell you what is going on.
The storage servers do not see or care about client count; they just see an ordered stream of (Version, Mutation) pairs, which they apply to the storage engine in order and persist with a commit() periodically, based on a time or byte limit, whichever is reached first. Therefore it does not matter how many clients are generating the random writes; all that matters is which writes end up in each commit batch on each storage server.
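To illustrate the shape of that loop, here is a rough sketch; the names (storage_engine, apply(), and the limit values) are just stand-ins for illustration, not the real fdbserver interfaces or knob values:

```python
import time

# Hypothetical thresholds; the real knobs and their values differ.
BYTE_LIMIT = 10 * 1024 * 1024
TIME_LIMIT = 0.5  # seconds

def apply_and_commit(storage_engine, mutation_stream):
    batch_bytes = 0
    batch_start = time.monotonic()
    for version, mutation in mutation_stream:  # already version-ordered
        storage_engine.apply(version, mutation)
        batch_bytes += len(mutation)
        # Commit when either limit is reached, whichever comes first.
        if (batch_bytes >= BYTE_LIMIT
                or time.monotonic() - batch_start >= TIME_LIMIT):
            storage_engine.commit()
            batch_bytes = 0
            batch_start = time.monotonic()
```

The point is that only the contents and boundaries of each commit batch matter to the storage engine, not how many clients produced the mutations.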
The extra writes are likely coming from the Pager, yes, but the probable cause is that in the lower-throughput regime the Pager must perform writes that it was able to skip in the higher-throughput regime.
Redwood does not have a WAL in the traditional sense. It writes new or updated pages to a free physical page, and in the case of updated pages it also writes a record to a pager log that says “logical page X as of version V is now located at physical page Y”. Eventually, in order to truncate records from the front of this pager log so that it does not grow indefinitely, once data versions prior to V are no longer being maintained the contents of Y might be copied onto physical page X. I say “might” because the copy will be skipped whenever that is possible without data loss, and the more write activity you have, the more often skipping is possible.
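Here is a toy sketch of that remap/copy cycle, just to illustrate the mechanism; the class, method, and field names are made up and greatly simplified relative to the real Pager:

```python
from collections import deque

# Illustrative copy-on-write pager with a remap log; not Redwood's
# actual structures, just the shape of the idea.
class Pager:
    def __init__(self, num_pages=1000):
        self.pages = {}                   # physical page id -> data
        self.free_list = deque(range(num_pages))
        self.remap_log = deque()          # FIFO of (logical, version, physical)

    def update_page(self, logical, version, data):
        # Copy-on-write: the new contents go to a free physical page,
        # and a remap record redirects readers of `logical` to it.
        phys = self.free_list.popleft()
        self.pages[phys] = data
        self.remap_log.append((logical, version, phys))

    def truncate_front(self, oldest_retained_version):
        # Pop remap records whose versions are no longer readable.
        while self.remap_log and self.remap_log[0][1] < oldest_retained_version:
            logical, version, phys = self.remap_log.popleft()
            # "Undo" the remap by copying the data back onto the
            # original physical page. This copy is the extra write,
            # and it is skipped when possible (see below).
            self.pages[logical] = self.pages[phys]
            self.free_list.append(phys)
```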
One mechanism for skipping some of these writes is that the log truncation intentionally lags behind the oldest readable commit version, so that when a remap entry is popped from the front of the log the copy can be skipped if it is known that the page is updated again or freed prior to the first retained readable version. The longer the remap cleanup window is, the more skippable writes of this form there will be. (The window is a knob; it defaults to 50 storage engine commits, which, due to other knobs, equates to up to 25 seconds but will be less under high write load.)
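In terms of the toy sketch above, the skip check when popping an entry would look roughly like this (a linear scan for clarity; the real implementation presumably does something more efficient):

```python
def copy_is_skippable(remap_log, logical, oldest_retained_version):
    # The copy back to the original page can be skipped if the same
    # logical page is remapped again (or freed) at a version preceding
    # every retained readable version: no reader can ever need the
    # contents at the original location.
    for entry_logical, entry_version, _ in remap_log:
        if entry_logical == logical and entry_version < oldest_retained_version:
            return True
    return False
```

The longer the window, the more of the log is still present when each entry is popped, so the more often this check succeeds; that is one reason higher write throughput can mean fewer copies per write.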
Another mechanism is that if, during the remap cleanup window, multiple sibling BTree nodes under the same parent node are updated, the BTree will update the parent node to point directly to the new child locations, so when the child remap entries are truncated from the log the new page data does not have to be copied onto the original pages.
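A toy illustration of that parent update, with dicts standing in for BTree nodes (again, not Redwood's actual structures):

```python
def detach_updated_children(parent_children, remapped):
    # parent_children: child key -> child physical page id.
    # remapped: original child page id -> current physical location.
    # Pointing the parent directly at the new locations means the
    # child remap entries can later be dropped with no copy back.
    for key, child_id in list(parent_children.items()):
        if child_id in remapped:
            parent_children[key] = remapped[child_id]

# e.g. two siblings under one parent were rewritten during the window:
parent = {"apple": 10, "mango": 11, "tomato": 12}
detach_updated_children(parent, {10: 107, 12: 243})
assert parent == {"apple": 107, "mango": 11, "tomato": 243}
```

Rewriting the parent is itself a page write, which is presumably why this pays off when several siblings under it have moved within the window, since one parent rewrite saves multiple copies.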
Most likely, these two optimizations are what reduce your write amplification with higher throughput. The RedwoodMetrics trace events will show whether or not this is the case.