Some questions about the process performance details shown by "status details"

Hi!

I built a FoundationDB cluster with three machines, and I'm trying to test its performance with the ReadWrite workload using the following test file.

testTitle=RandomReadWriteTest
    testName=ReadWrite
    testDuration=100
    transactionsPerSecond=10000
    writesPerTransactionA=0
    readsPerTransactionA=10
    writesPerTransactionB=5
    readsPerTransactionB=5
    ; Fraction of transactions that will be of type B
    alpha=0.2
    nodeCount=50000000
    ; keyBytes=16
    valueBytes=20
    discardEdgeMeasurements=false
    warmingDelay=20.0
    timeout=300000.0
    databasePingDelay=300000.0

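For reference, the per-second operation rates implied by these parameters can be worked out by mixing the two transaction types with `alpha`. This is just arithmetic over the values in the test file above, not anything FDB-specific:

```python
# Derive the expected read/write operation rates from the workload parameters.
tps = 10_000                 # transactionsPerSecond
alpha = 0.2                  # fraction of transactions that are type B
reads_a, writes_a = 10, 0    # readsPerTransactionA / writesPerTransactionA
reads_b, writes_b = 5, 5     # readsPerTransactionB / writesPerTransactionB

# Weighted average of ops per transaction across the two types.
reads_per_txn = (1 - alpha) * reads_a + alpha * reads_b    # 8 + 1 = 9
writes_per_txn = (1 - alpha) * writes_a + alpha * writes_b  # 0 + 1 = 1

print(f"{tps * reads_per_txn:.0f} reads/s, {tps * writes_per_txn:.0f} writes/s")
# → 90000 reads/s, 10000 writes/s
```

So at transactionsPerSecond=10000 the workload asks for roughly 90,000 reads/s and 10,000 writes/s in aggregate.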
During the test I used `status details` to monitor the performance of each process, and I got the following result, which shows that disk IO on the machine running the log and storage servers is saturated.

Process performance details:
  // 8 stateless processes
  172.31.19.187:4500     ( 56% cpu; 22% machine; 0.053 Gbps;  0% disk IO; 0.3 GB / 4.1 GB RAM  )
  172.31.19.187:4600     ( 54% cpu; 22% machine; 0.053 Gbps;  0% disk IO; 0.3 GB / 4.1 GB RAM  )
  172.31.19.187:4700     (  6% cpu; 22% machine; 0.053 Gbps;  0% disk IO; 0.3 GB / 4.1 GB RAM  )
  172.31.19.187:4800     ( 12% cpu; 22% machine; 0.053 Gbps;  0% disk IO; 0.3 GB / 4.1 GB RAM  )
  172.31.19.187:4900     (  1% cpu; 22% machine; 0.053 Gbps;  0% disk IO; 0.3 GB / 4.1 GB RAM  )
  172.31.19.187:5000     (  2% cpu; 22% machine; 0.053 Gbps;  0% disk IO; 0.3 GB / 4.1 GB RAM  )
  172.31.19.187:5100     ( 57% cpu; 22% machine; 0.053 Gbps;  0% disk IO; 0.3 GB / 4.1 GB RAM  )
  172.31.19.187:5200     (  1% cpu; 22% machine; 0.053 Gbps;  0% disk IO; 0.3 GB / 4.1 GB RAM  )
  // 8 test processes
  172.31.21.76:4500      ( 39% cpu; 33% machine; 0.172 Gbps;  0% disk IO; 0.3 GB / 4.6 GB RAM  )
  172.31.21.76:4600      ( 39% cpu; 33% machine; 0.172 Gbps;  0% disk IO; 0.3 GB / 4.6 GB RAM  )
  172.31.21.76:4700      ( 38% cpu; 33% machine; 0.172 Gbps;  0% disk IO; 0.3 GB / 4.6 GB RAM  )
  172.31.21.76:4800      ( 38% cpu; 33% machine; 0.172 Gbps;  0% disk IO; 0.3 GB / 4.6 GB RAM  )
  172.31.21.76:4900      ( 38% cpu; 33% machine; 0.172 Gbps;  0% disk IO; 0.3 GB / 4.6 GB RAM  )
  172.31.21.76:5000      ( 38% cpu; 33% machine; 0.172 Gbps;  0% disk IO; 0.3 GB / 4.6 GB RAM  )
  172.31.21.76:5100      ( 38% cpu; 33% machine; 0.172 Gbps;  0% disk IO; 0.3 GB / 4.6 GB RAM  )
  172.31.21.76:5200      ( 38% cpu; 33% machine; 0.172 Gbps;  0% disk IO; 0.3 GB / 4.6 GB RAM  )
  // 2 log + 2 storage processes
  172.31.25.57:4500      ( 67% cpu; 35% machine; 0.175 Gbps;100% disk IO; 0.9 GB / 4.2 GB RAM  )
  172.31.25.57:4600      ( 66% cpu; 35% machine; 0.175 Gbps;100% disk IO; 0.7 GB / 4.2 GB RAM  )
  172.31.25.57:4700      ( 32% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.5 GB / 4.2 GB RAM  )
  172.31.25.57:4800      ( 34% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.1 GB / 4.2 GB RAM  )
  172.31.25.57:4900      ( 32% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.3 GB / 4.2 GB RAM  )
  172.31.25.57:5000      ( 33% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.2 GB / 4.2 GB RAM  )
  172.31.25.57:5100      ( 32% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.3 GB / 4.2 GB RAM  )
  172.31.25.57:5200      ( 32% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.1 GB / 4.2 GB RAM  )
  172.31.25.57:5300      ( 32% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.1 GB / 4.2 GB RAM  )
  172.31.25.57:5400      ( 33% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.4 GB / 4.2 GB RAM  )
  172.31.25.57:5500      ( 32% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.1 GB / 4.2 GB RAM  )
  172.31.25.57:5600      ( 32% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.0 GB / 4.2 GB RAM  )
  172.31.25.57:5700      ( 32% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.1 GB / 4.2 GB RAM  )
  172.31.25.57:5800      ( 33% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.8 GB / 4.2 GB RAM  )
  172.31.25.57:5900      ( 32% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.3 GB / 4.2 GB RAM  )
  172.31.25.57:6000      ( 31% cpu; 35% machine; 0.175 Gbps;100% disk IO; 1.1 GB / 4.2 GB RAM  )

So at first I thought disk IO had become the bottleneck, and that even if I set a higher transactionsPerSecond, the throughput reported by the metrics would not increase.
However, when I actually changed transactionsPerSecond from 10,000 to 40,000, the reported throughput also increased, up to 40,000 transactions/s.

So I'm curious: why does the throughput still scale with the transactionsPerSecond I set in the test file, even though disk IO is already at 100%?

Looking forward to your help, thanks very much!