Yes, I’ve benchmarked FoundationDB with go-ycsb on our private cloud too, with both "single" and "triple" redundancy and the memory (RAM) storage engine, but I only get ~38K QPS there, while I get ~69K QPS on my local laptop. Here is my setup:
1. This is my go-ycsb workload (100% read):
recordcount=10000000
operationcount=10000000
workload=core
readallfields=false
writeallfields=true
readproportion=1
updateproportion=0
scanproportion=0
insertproportion=0
requestdistribution=zipfian
fieldcount=1
fieldlength=20
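For completeness, the data was loaded beforehand with the standard go-ycsb load phase; roughly the following (reconstructed from memory, so the exact flags may differ slightly):

./bin/go-ycsb load foundationdb -P workloads/workloadc -p fdb.cluster=./fdb.cluster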
2. These are my results. I used 2 machines to run go-ycsb independently and got almost the same result on each:
./bin/go-ycsb run foundationdb -P workloads/workloadc -p fdb.cluster=./fdb.cluster -p threadcount=1000
READ - Takes(s): 10.0, Count: 391855, QPS: 39154.3, Avg(us): 25486, Min(us): 6377, Max(us): 88940, 95th(us): 58000, 99th(us): 69000
READ - Takes(s): 20.0, Count: 787808, QPS: 39374.7, Avg(us): 25376, Min(us): 6377, Max(us): 88940, 95th(us): 60000, 99th(us): 68000
READ - Takes(s): 30.0, Count: 1153112, QPS: 38426.9, Avg(us): 25980, Min(us): 6377, Max(us): 94534, 95th(us): 61000, 99th(us): 72000
READ - Takes(s): 40.0, Count: 1528051, QPS: 38193.7, Avg(us): 26169, Min(us): 6377, Max(us): 94534, 95th(us): 61000, 99th(us): 73000
READ - Takes(s): 50.0, Count: 1891968, QPS: 37833.4, Avg(us): 26422, Min(us): 3928, Max(us): 94534, 95th(us): 62000, 99th(us): 74000
READ - Takes(s): 60.0, Count: 2262222, QPS: 37698.7, Avg(us): 26508, Min(us): 3928, Max(us): 94534, 95th(us): 62000, 99th(us): 73000
READ - Takes(s): 70.0, Count: 2638403, QPS: 37687.1, Avg(us): 26520, Min(us): 3928, Max(us): 94534, 95th(us): 62000, 99th(us): 74000
READ - Takes(s): 80.0, Count: 2985904, QPS: 37320.1, Avg(us): 26785, Min(us): 3928, Max(us): 97336, 95th(us): 62000, 99th(us): 75000
READ - Takes(s): 90.0, Count: 3337194, QPS: 37076.7, Avg(us): 26956, Min(us): 3928, Max(us): 97336, 95th(us): 63000, 99th(us): 75000
READ - Takes(s): 100.0, Count: 3680602, QPS: 36803.1, Avg(us): 27162, Min(us): 3928, Max(us): 97336, 95th(us): 63000, 99th(us): 76000
READ - Takes(s): 110.0, Count: 4023143, QPS: 36571.4, Avg(us): 27327, Min(us): 3928, Max(us): 262296, 95th(us): 63000, 99th(us): 76000
READ - Takes(s): 120.0, Count: 4387477, QPS: 36559.9, Avg(us): 27344, Min(us): 3928, Max(us): 262296, 95th(us): 63000, 99th(us): 76000
READ - Takes(s): 130.0, Count: 4751617, QPS: 36548.7, Avg(us): 27353, Min(us): 3928, Max(us): 262296, 95th(us): 63000, 99th(us): 76000
READ - Takes(s): 140.0, Count: 5110191, QPS: 36499.3, Avg(us): 27385, Min(us): 3928, Max(us): 262296, 95th(us): 64000, 99th(us): 76000
READ - Takes(s): 150.0, Count: 5457514, QPS: 36381.5, Avg(us): 27479, Min(us): 3928, Max(us): 262296, 95th(us): 64000, 99th(us): 76000
READ - Takes(s): 160.0, Count: 5801829, QPS: 36259.6, Avg(us): 27570, Min(us): 3928, Max(us): 262296, 95th(us): 64000, 99th(us): 76000
READ - Takes(s): 170.0, Count: 6151706, QPS: 36184.8, Avg(us): 27627, Min(us): 3928, Max(us): 262296, 95th(us): 64000, 99th(us): 76000
READ - Takes(s): 180.0, Count: 6507109, QPS: 36149.0, Avg(us): 27651, Min(us): 3928, Max(us): 262296, 95th(us): 64000, 99th(us): 76000
READ - Takes(s): 190.0, Count: 6832262, QPS: 35957.8, Avg(us): 27803, Min(us): 3928, Max(us): 262296, 95th(us): 65000, 99th(us): 77000
READ - Takes(s): 200.0, Count: 7173362, QPS: 35865.4, Avg(us): 27868, Min(us): 3928, Max(us): 262296, 95th(us): 65000, 99th(us): 77000
READ - Takes(s): 210.0, Count: 7531922, QPS: 35864.9, Avg(us): 27875, Min(us): 3928, Max(us): 262296, 95th(us): 65000, 99th(us): 77000
READ - Takes(s): 220.0, Count: 7882459, QPS: 35828.1, Avg(us): 27904, Min(us): 3928, Max(us): 262296, 95th(us): 65000, 99th(us): 77000
READ - Takes(s): 230.0, Count: 8226143, QPS: 35764.6, Avg(us): 27952, Min(us): 3928, Max(us): 262296, 95th(us): 66000, 99th(us): 77000
READ - Takes(s): 240.0, Count: 8575813, QPS: 35731.4, Avg(us): 27980, Min(us): 3928, Max(us): 262296, 95th(us): 66000, 99th(us): 77000
READ - Takes(s): 250.0, Count: 8960553, QPS: 35841.1, Avg(us): 27894, Min(us): 3928, Max(us): 262296, 95th(us): 66000, 99th(us): 77000
READ - Takes(s): 260.0, Count: 9311212, QPS: 35811.3, Avg(us): 27918, Min(us): 3928, Max(us): 262296, 95th(us): 66000, 99th(us): 77000
READ - Takes(s): 270.0, Count: 9662951, QPS: 35787.7, Avg(us): 27935, Min(us): 3928, Max(us): 262296, 95th(us): 66000, 99th(us): 78000
Run finished, takes 4m39.745710336s
READ - Takes(s): 279.7, Count: 10000000, QPS: 35749.8, Avg(us): 27952, Min(us): 1045, Max(us): 262296, 95th(us): 66000, 99th(us): 78000
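One sanity check on these numbers: with threadcount=1000 and an average latency of ~27.9 ms, a single go-ycsb client can do at most roughly

1000 threads / 0.0279 s ≈ 35.8K QPS

which is essentially what I measured, so it looks like each client is limited by per-request latency rather than by cluster capacity (the fdb processes are only at ~40% CPU, see the status below).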
3. This is the detailed status, showing each fdb process:
fdb> status details
Using cluster file `/etc/foundationdb/fdb.cluster'.
Configuration:
Redundancy mode - triple
Storage engine - memory
Coordinators - 3
Cluster:
FoundationDB processes - 3
Machines - 3
Memory availability - 7.6 GB per process on machine with least available
Retransmissions rate - 1 Hz
Fault Tolerance - 0 machines (1 without data loss)
Server time - 05/14/18 18:22:56
Data:
Replication health - Healthy
Moving data - 0.000 GB
Sum of key-value sizes - 226 MB
Disk space used - 2.727 GB
Operating space:
Storage server - 493.6 GB free on most full server
Log server - 493.6 GB free on most full server
Workload:
Read rate - 81502 Hz
Write rate - 1 Hz
Transactions started - 80006 Hz
Transactions committed - 0 Hz
Conflict rate - 0 Hz
Backup and DR:
Running backups - 0
Running DRs - 0
Process performance details:
10.5.0.23:4500 ( 40% cpu; 5% machine; 0.027 Gbps; 1% disk IO; 0.8 GB / 7.6 GB RAM )
10.5.0.52:4500 ( 41% cpu; 5% machine; 0.028 Gbps; 1% disk IO; 0.8 GB / 7.6 GB RAM )
10.5.0.55:4500 ( 42% cpu; 5% machine; 0.026 Gbps; 1% disk IO; 0.9 GB / 7.6 GB RAM )
Coordination servers:
10.5.0.23:4500 (reachable)
10.5.0.52:4500 (reachable)
10.5.0.55:4500 (reachable)
Client time: 05/14/18 18:22:56
4. I used all 3 fdb instances as coordinators (see the commands sketched below).
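For reference, the cluster was configured with roughly the following fdbcli commands; I'm reconstructing these from memory, so treat the exact invocation as approximate:

fdb> configure triple memory
fdb> coordinators 10.5.0.23:4500 10.5.0.52:4500 10.5.0.55:4500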
I haven’t figured out yet where the bottleneck is (the FDB Go client or go-ycsb itself).
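To make the question concrete, here is a minimal sketch of what I assume each go-ycsb read boils down to through the FDB Go binding. This is my own reconstruction, not the binding's actual code; the key name and API version are placeholders:

package main

import (
	"fmt"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

func main() {
	// 510 matches the 5.1 client I'm running; adjust to your installed version.
	fdb.MustAPIVersion(510)
	// Opens the database using the default cluster file.
	db := fdb.MustOpenDefault()

	// "usertable/user1" is just a placeholder; go-ycsb's real key layout may differ.
	key := fdb.Key("usertable/user1")

	// One read == one transaction: before the storage read, the client has to
	// fetch a read version from the proxies, which adds a network round trip.
	val, err := db.ReadTransact(func(tr fdb.ReadTransaction) (interface{}, error) {
		return tr.Get(key).Get()
	})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s => %v\n", key, val)
}

If the binding works like this, every read pays a read-version request on top of the actual get, so the higher network latency on the cloud would increase per-operation latency and therefore reduce throughput at a fixed threadcount, even though the fdb processes are far from saturated. That would point at latency rather than at either the Go client or go-ycsb being inherently slow, but I'd appreciate confirmation.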