I read on Hacker News that the FDB server is a single-threaded process (https://news.ycombinator.com/item?id=8729420). Is that correct?
If it is indeed a single-threaded server, what is the suggested deployment model for FDB server on multi-core machines if we want to utilize all the cores? Should we run multiple FDB server processes on every machine in the cluster? In that case, though, a single machine or disk going bad would take down multiple FDB instances at once…
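To make the question concrete: if I understand the stock `/etc/foundationdb/foundationdb.conf` layout correctly (I'm assuming fdbmonitor starts one fdbserver per `[fdbserver.<port>]` section; the paths and ports below are just placeholders), is this roughly what "multiple servers per machine" would look like?

```ini
[general]
restart_delay = 60
cluster_file = /etc/foundationdb/fdb.cluster

; Defaults shared by all fdbserver processes on this machine.
; $ID expands to the port, so each process gets its own datadir.
[fdbserver]
command = /usr/sbin/fdbserver
datadir = /var/lib/foundationdb/data/$ID
logdir = /var/log/foundationdb

; One section per core we want to use: four processes, four ports.
[fdbserver.4500]
[fdbserver.4501]
[fdbserver.4502]
[fdbserver.4503]
```

If that is the intended pattern, my failure-domain concern above still applies: all four processes share the same machine and (here) could share the same disk.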
Is there any guidance on how the required memory for a server instance should be computed? Is it more like: there is a minimum of X GB RAM required, and beyond that the more you have, the better the performance (up to a limit)? Or is there a more systematic way to reason about how much memory to allocate to a server instance?
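For example, I've seen `memory` and `storage_memory` settings mentioned in the conf file (I'm assuming these are per-process limits, and the values below are made up, not recommendations). Is tuning these the systematic way to do it, and if so, how should the numbers be derived from workload or machine size?

```ini
; Hypothetical sizing for one fdbserver process on a 32 GB machine
; running 4 processes: is it as simple as dividing RAM by process count?
[fdbserver]
memory = 8GiB          ; assumed: hard memory limit for the process
storage_memory = 1GiB  ; assumed: memory reserved for the storage engine
```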