Cluster tuning cookbook

(Alex Miller) #21

Correct. (In the conf file, and on the command line.)

That’s correct though. The majority of proxy memory usage comes from processing and handling commits, and SERVER_MEM_LIMIT caps that. It is not a generic memory-limiting knob, though, despite what its name implies.

(Christophe Chevalier) #22

Setting knob_server_mem_limit = ... in foundationdb.conf does not seem to be supported: the corresponding fdbserver process crashes with Process="fdbserver.4503": Unrecognized knob option 'server_mem_limit'.

It seems that this particular knob is supposed to be configured via the --memory command line argument, which corresponds to the memory = ... field in foundationdb.conf. So I think I’m back to square one :slight_smile:
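For illustration, a minimal sketch of what that looks like in foundationdb.conf — the memory field under an fdbserver section maps to the --memory command line argument (the 8GiB value here is just an example, not a recommendation):

```ini
[fdbserver]
# Caps total resident memory for each fdbserver process;
# equivalent to passing --memory 8GiB on the command line.
memory = 8GiB
```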

(Alex Miller) #23

It appears the change that adds knob_server_mem_limit isn’t in 5.2, and I forgot that would be a relevant detail for you. :confused:

(Christophe Chevalier) #24

Oh, right. And I was also looking at the code in the master branch :slight_smile:

(Amirouche) #25

IMO the development part with FoundationDB is a breeze, but the operations part is daunting. I have not read the implementation details, so that’s maybe why I have a hard time following the conversation around spinning up an FDB cluster.

(A.J. Beamon) #26

I believe the default is 2GB, which makes the page cache one of the bigger users of memory. There is also a 64k page cache, which defaults to 200MB.
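If I understand correctly, those cache sizes are themselves knobs, so they can be overridden the same way as other knobs — a sketch, assuming the knob names are page_cache_4k and page_cache_64k and that values are given in bytes:

```ini
[fdbserver]
# Shrink the 4k page cache from its ~2GB default to 1GB (example value).
knob_page_cache_4k = 1000000000
# The 64k page cache (~200MB default) can be adjusted separately.
knob_page_cache_64k = 100000000
```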


Hi! I’ve started playing around with FoundationDB with an aim towards understanding its storage engine performance characteristics, and this thread (and the forum in general) is of great help! I’m using a similar test parameter set to test R/W performance (started with a single FDB instance, and a single ssd or single memory engine):
However, I needed to increase the nodeCount to 10000000 to get ~1GiB “Sum of key value sizes” as reported in the fdbcli status.
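That relationship between nodeCount and the reported dataset size is just arithmetic: the “Sum of key value sizes” in fdbcli status is roughly nodeCount times the per-pair key+value size. A small sketch, assuming hypothetical sizes of 16-byte keys and 100-byte values (the actual test workload’s sizes may differ):

```python
def node_count_for_total(total_bytes: int, key_bytes: int, value_bytes: int) -> int:
    """Estimate how many key-value pairs are needed to reach a target
    total of key+value bytes, given fixed per-pair sizes."""
    return total_bytes // (key_bytes + value_bytes)

# Roughly 1 GiB of key+value data at ~116 bytes per pair
# comes out near the 10,000,000 nodeCount mentioned above.
pairs = node_count_for_total(2**30, key_bytes=16, value_bytes=100)
print(pairs)  # 9256395, i.e. ~9.3 million pairs
```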