I’m pretty new to FDB and I have some confusion about the storage engines. Please correct me if I’m wrong or asking dumb questions.
For the memory engine, FDB tries to fit everything into memory, but at the same time it also logs data to disk for durability. So when the data size exceeds memory, what does FDB do? Does it function like a cache where the oldest data is evicted? And when I then want to access that piece of data, will FDB have to look it up on disk, causing long latency?
For the ssd engine, does FDB commit all transactions directly to disk? Does memory still function as a cache?
I’m currently benchmarking FDB and have run into a problem. I’m using 8 EC2 m5a.large instances, each with 2 vCPUs and 8 GB of memory, and I attached a 4000 GB gp2 volume to each of them at launch (on “/dev/sda1”). There is only one fdbserver process running on each instance, and the cluster uses triple redundancy.
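For reference, this is roughly how I set the cluster up from fdbcli (a sketch, assuming the ssd storage engine; swap in `memory` if that’s what you run):

```shell
# Configure the database once from any machine in the cluster.
# "triple" = triple redundancy, "ssd" = the on-disk B-tree storage engine.
fdbcli --exec "configure triple ssd"

# Verify per-process load and disk IO afterwards.
fdbcli --exec "status details"
```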
I’m running a write benchmark with a key size of 256 bytes and a value size of 1000 bytes. At the very beginning the throughput reaches ~3000 ops/sec, but it drops to <1000 ops/sec in less than 20 seconds. I’ve been monitoring the status details from fdbcli, shown below:
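To make the workload concrete, here is a minimal sketch of the write loop I’m describing, using the official Python bindings (the helper names like `run_benchmark` are mine, and the throughput math is just ops divided by wall-clock time):

```python
import os
import struct
import time

KEY_SIZE = 256      # bytes per key, as in the benchmark
VALUE_SIZE = 1000   # bytes per value

def make_key(i):
    # 8-byte big-endian counter padded to a fixed 256-byte key.
    return struct.pack(">Q", i).ljust(KEY_SIZE, b"\x00")

def make_value():
    # Random payload: incompressible, so it approximates worst-case disk I/O.
    return os.urandom(VALUE_SIZE)

def run_benchmark(n_ops=10000):
    # Lazy import: needs the 'foundationdb' package and a running cluster.
    import fdb
    fdb.api_version(620)
    db = fdb.open()
    start = time.time()
    for i in range(n_ops):
        db[make_key(i)] = make_value()   # one transaction per write
    elapsed = time.time() - start
    print("%.0f ops/sec" % (n_ops / elapsed))
```

Each `db[key] = value` assignment is its own transaction here, which is the simplest (not the fastest) way to drive writes.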
Process performance details:
100.90.8.233:4500  ( 33% cpu; 20% machine; 0.040 Gbps; 92% disk IO; 5.8 GB / 7.1 GB RAM )
100.90.9.50:4500   ( 15% cpu; 10% machine; 0.016 Gbps; 92% disk IO; 4.3 GB / 7.1 GB RAM )
100.90.11.115:4500 ( 15% cpu; 10% machine; 0.008 Gbps; 93% disk IO; 4.2 GB / 7.2 GB RAM )
100.90.33.45:4500  ( 33% cpu; 21% machine; 0.046 Gbps; 94% disk IO; 6.2 GB / 7.1 GB RAM )
100.90.35.228:4500 ( 24% cpu; 16% machine; 0.030 Gbps; 93% disk IO; 4.4 GB / 7.1 GB RAM )
100.90.46.60:4500  ( 45% cpu; 28% machine; 0.065 Gbps; 93% disk IO; 5.8 GB / 7.1 GB RAM )
100.90.47.93:4500  ( 32% cpu; 19% machine; 0.043 Gbps; 87% disk IO; 4.2 GB / 7.0 GB RAM )
100.90.54.82:4500  ( 13% cpu; 9% machine; 0.009 Gbps; 90% disk IO; 4.2 GB / 7.2 GB RAM )
I’ve noticed that disk IO stays high the whole time, so is the disk the bottleneck for my database? What would you suggest doing in my case?
Thank you in advance.