Cluster tuning cookbook

Correct. (In the conf file, and on the command line.)

That’s correct, though. The majority of proxy memory usage comes from processing and handling commits, and SERVER_MEM_LIMIT caps that. Despite what its name implies, it is not a generic memory-limiting knob.

Setting knob_server_mem_limit = ... in foundationdb.conf does not seem to be supported: the corresponding fdbserver process crashes with

    Process="fdbserver.4503": Unrecognized knob option 'server_mem_limit'

It seems that this particular knob is supposed to be configured via the --memory command line argument, which corresponds to the memory = ... field in foundationdb.conf. So I think I’m back to square one :slight_smile:
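In case it helps anyone reading along, here is a minimal sketch of that memory = field in foundationdb.conf. The 8GiB value is only illustrative (it is, as far as I know, the documented default), and the same limit can be passed as --memory 8GiB on the fdbserver command line:

    [fdbserver]
    # Per-process memory limit; the process is killed and restarted
    # if it exceeds this. Can also be set per process under a
    # [fdbserver.<port>] section.
    memory = 8GiB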

It appears the change that adds knob_server_mem_limit isn’t in 5.2, and I forgot that would be a relevant detail for you. :confused:

Oh, right. And I was also looking at the code in the master branch :slight_smile:

IMO the development side of FoundationDB is a breeze, but the operations side is daunting. I have not read the implementation details, so that’s maybe why I have a hard time following the conversation around spinning up an FDB cluster.

I believe the default is 2GB, which makes the page cache one of the bigger users of memory. There is also a 64k page cache, which defaults to 200MB.
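If you want to experiment with those sizes, they are regular knobs, so (unlike knob_server_mem_limit above) setting them in foundationdb.conf should be accepted. A sketch, assuming the knob is named page_cache_4k (I haven’t re-checked the exact name in 5.2), with 4GB as an arbitrary example value; the cache counts against the process memory limit, so you’d want to raise memory = accordingly:

    [fdbserver]
    # Grow the 4k page cache from its 2GB default to 4GB (value is in bytes).
    knob_page_cache_4k = 4000000000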

Hi! I’ve started playing around with FoundationDB with an aim towards understanding its storage engine performance characteristics, and this thread (and the forum in general) is of great help! I’m using a similar test parameter set to test R/W performance (starting with a single FDB instance and a single ssd or memory storage engine):
"
testTitle=RandomReadWriteTest
testName=ReadWrite
testDuration=30.0
transactionsPerSecond=1000000
readsPerTransactionA=10
writesPerTransactionA=0
readsPerTransactionB=0
writesPerTransactionB=10
alpha=${rwmix}
nodeCount=10000000
valueBytes=100
"
However, I needed to increase the nodeCount to 10000000 to get ~1GiB “Sum of key value sizes” as reported in the fdbcli status.
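For what it’s worth, that roughly matches the back-of-the-envelope math, assuming the ReadWrite workload’s default key size of 16 bytes (I haven’t double-checked that default): 10,000,000 nodes × (16 + 100) bytes ≈ 1.16 GB, which is about the ~1GiB of key-value data that status reports before replication overhead.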