I am now running a cluster with 60 Kubernetes storage pods (in a single DC), with each pod running 3 storage processes. I would like to know how I should scale the number of log processes correspondingly. Certainly, different write workloads will require different numbers of log processes. So the related question is: what performance metrics can I find in status.json that would let me determine whether the log processes are saturated?
… man, we really need to go clean up and rewrite most of the docs surrounding recommended configuration and performance numbers.
It really depends on your storage characteristics. On physical SSDs, which isn't the network-attached storage that Kubernetes deployments sometimes use, we've typically seen a log:storage ratio of around 1:8 for the ssd engine and 1:2 for the memory engine being near optimal. For your cluster (60 pods × 3 = 180 storage processes), 1:8 would suggest somewhere around 22–23 logs as a starting point. Running a write benchmark and seeing whether throughput improves as you add more logs is the easy and accurate way to find your optimal ratio.
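On the status.json half of your question: the signals I'd watch are ratekeeper's view of the worst tlog queue, whether performance_limited_by points at the log queue, and, per log role, the gap between the rate of mutations arriving (input_bytes.hz) and the rate being made durable (durable_bytes.hz); a queue that keeps growing means the logs can't keep up. Here's a minimal sketch of pulling those fields, with names as I recall them from recent FDB status schemas, so verify against your version's output:

```python
import json
import subprocess

# Grab machine-readable status via fdbcli ("status json" is a standard command).
raw = subprocess.check_output(["fdbcli", "--exec", "status json"]).decode()
cluster = json.loads(raw)["cluster"]

# Ratekeeper's view: what (if anything) is limiting performance, and the
# deepest tlog queue in the cluster.
qos = cluster.get("qos", {})
print("limited by:", qos.get("performance_limited_by", {}).get("name"))
print("worst tlog queue bytes:", qos.get("worst_queue_bytes_log_server"))

# Per-process view: for each log role, compare the incoming mutation rate with
# the rate being made durable; a persistent gap means the queue is growing.
for pid, proc in cluster.get("processes", {}).items():
    for role in proc.get("roles", []):
        if role.get("role") != "log":
            continue
        in_hz = role.get("input_bytes", {}).get("hz", 0)
        dur_hz = role.get("durable_bytes", {}).get("hz", 0)
        queue = role.get("queue_disk_used_bytes")
        busy = proc.get("disk", {}).get("busy")
        print(f"{pid}: input {in_hz:,.0f} B/s, durable {dur_hz:,.0f} B/s, "
              f"queue {queue} B, disk busy {busy}")
```

Sustained high disk busy on the log processes alongside a growing queue is a pretty reliable sign you'd benefit from more logs.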
A related question: what about the number of proxies? The FDB architecture document only states that the minimum number of class=stateless processes is 4 proxies. Should the number of proxies also be increased with the number of storage pods, or can it stay fixed?
My personal benchmarking has generally found that keeping the number of proxies and the number of logs roughly the same is optimal; I saw diminishing returns for each additional proxy added above the number of logs. Depending on your particular workload this could change, but it's probably a decent starting place.
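For concreteness: if your benchmark lands you at, say, 22 logs for the 180 storage processes above, this rule of thumb would put you at roughly 22 proxies too, and both counts are set with a single fdbcli command, something like:

```
fdb> configure logs=22 proxies=22
```

(Note that on FDB 7.0+ the proxy role is split, so you'd configure commit_proxies and grv_proxies separately instead.)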