Hi,
We wrote a script that simulates multi-client/multi-threaded data loading (put) and querying (get), and ran it against both FDB 6.2 and FDB 7.1. With 100k records (key size 100 bytes, value size 500 bytes), 7.1 performs slightly better. But when we increase the record count to 500k, 6.2 is about 3x faster at loading and 2x faster at querying. I'm wondering what causes this, since 7.1 is supposed to be faster. Both versions use the default sqlite storage engine, so I'd expect them to perform about the same, if not better on 7.1. The tests are run on the same cluster, so the hardware, resources, and storage are identical. What could cause the slowdown in 7.1 when we only increase the number of records by 5x?
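For reference, the workload is roughly equivalent to the sketch below (a simplified illustration using the Python bindings; the batch size, key layout, and thread count are placeholders rather than our exact script):

```python
import os
import threading

import fdb

fdb.api_version(710)          # e.g. 630 when testing against the 6.2 client
db = fdb.open()               # default cluster file

RECORDS = 500_000
THREADS = 8
BATCH = 100                   # keys written per transaction
KEY_SIZE = 100
VALUE_SIZE = 500


def make_key(i):
    # fixed-width, 100-byte key
    return ("key-%012d" % i).encode().ljust(KEY_SIZE, b"x")


@fdb.transactional
def put_batch(tr, start, count):
    for i in range(start, start + count):
        tr[make_key(i)] = os.urandom(VALUE_SIZE)


@fdb.transactional
def get_one(tr, i):
    # force the read to resolve inside the transaction
    return tr[make_key(i)].present()


def load_worker(start, count):
    for off in range(0, count, BATCH):
        put_batch(db, start + off, min(BATCH, count - off))


def query_worker(start, count):
    for i in range(start, start + count):
        get_one(db, i)


def run(worker):
    per_thread = RECORDS // THREADS
    threads = [threading.Thread(target=worker, args=(t * per_thread, per_thread))
               for t in range(THREADS)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()


if __name__ == "__main__":
    run(load_worker)   # load phase
    run(query_worker)  # query phase
```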
It’s hard to evaluate without looking at the exact configuration and metrics of the cluster. I’d expect 7.1 to perform better, but some configuration difference, e.g., the GRV/commit proxy counts, could be affecting performance. It’s also worth checking whether Ratekeeper is throttling the workload for some reason.
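As a starting point, you can check both from fdbcli. A minimal sketch, assuming a 7.x cluster (where the old single proxies role is split into GRV and commit proxies; on 6.2 it is one `proxies=N` knob), with the counts below being examples only:

```
fdb> status details                             # roles per process and any "performance limited by" reason from Ratekeeper
fdb> status json                                # full machine-readable metrics, including ratekeeper throttling info
fdb> configure grv_proxies=1 commit_proxies=3   # example counts, tune for your workload
```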
Is there any documentation on GRV/commit proxy best practices, e.g., how many are usually needed for, say, a cluster with 3 storage and 4 log processes?