If you run `status` in fdbcli on this cluster, it should give you a warning like:
> WARNING: A single process is both a transaction log and a storage server.
> For best performance use dedicated disks for the transaction logs by setting process classes.
EDIT: Ah, the code only logs this warning if you have 10 or more processes…
So I’d suggest setting `process_class=stateless` on process 0, `process_class=log` on process 1, and `process_class=storage` on the rest, which will at least isolate the different classes of priority work from each other. If you search these forums for “process class”, there are a number of other threads that go into process classes and how/why to configure them.
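In `foundationdb.conf` that would look something like the following sketch. The ports (4500–4503) are placeholders for however your processes are numbered; `class =` is the conf-file spelling of the process class:

```
[fdbserver.4500]
class = stateless

[fdbserver.4501]
class = log

[fdbserver.4502]
class = storage

[fdbserver.4503]
class = storage
```

After editing the file, restart the fdbserver processes so they pick up their new classes.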
However, I’m concerned that even if you continue with this test, it isn’t really going to be representative of what running a real workload against a real cluster would be like, which I assume is why you’re doing this benchmarking in the first place.
By running your transaction logs and storage nodes on the same disk, the sequential transaction log workload is no longer sequential from the disk’s point of view, and it will be fighting the rest of your processes for fsyncs. One machine with eight attached disks would be a better setup.
Though I greatly appreciate that someone contributed a YCSB client for FDB, YCSB is not a great benchmarking workload for FDB, as the results don’t extrapolate well to other workloads. The implementation issued one transaction per key read or written, which means it’s actually benchmarking the GetReadVersion operation that starts a transaction rather than reads or writes themselves. Real-world workloads tend to do multiple operations per transaction, which amortizes the cost of starting a transaction.
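To see why that matters, here’s a back-of-the-envelope cost model. The latency numbers are made up for illustration, not measurements from any real cluster:

```python
# Illustrative cost model for amortizing GetReadVersion (GRV) over a
# transaction. Both latencies below are assumptions, not measurements.
GRV_MS = 1.0   # assumed cost of the GRV that starts every transaction
READ_MS = 0.2  # assumed cost of one key read once the transaction is open

def per_read_cost_ms(reads_per_txn: int) -> float:
    """Average cost per read when the GRV is spread over reads_per_txn reads."""
    return GRV_MS / reads_per_txn + READ_MS

# One read per transaction (the YCSB pattern): the GRV dominates.
print(round(per_read_cost_ms(1), 3))   # → 1.2
# Ten reads per transaction: the GRV is mostly amortized away.
print(round(per_read_cost_ms(10), 3))  # → 0.3
```

With one operation per transaction, most of what you measure is transaction startup, not storage performance.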
FoundationDB clients also have only one background thread that handles FDB network traffic, which a sustained benchmarking workload can easily saturate. Running multiple client processes is the better approach.
YCSB is written against the 5.2 bindings, but you can still download a 6.2 client library from foundationdb.org, point to it with FDB_NETWORK_OPTION_EXTERNAL_CLIENT_DIRECTORY, and run against a 6.2 cluster.
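Network options can also be supplied via environment variables of the form `FDB_NETWORK_OPTION_<NAME>`, which is convenient when you can’t easily change the binding’s code. A sketch; the directory path is an assumption — use wherever you unpacked the 6.2 `libfdb_c`:

```python
# Point the multi-version client at a 6.2 client library directory before
# the fdb module initializes its network. The path below is a placeholder.
import os

os.environ["FDB_NETWORK_OPTION_EXTERNAL_CLIENT_DIRECTORY"] = "/opt/fdb-6.2/lib"

# The fdb import must come after the variable is set:
# import fdb
# fdb.api_version(520)   # the 5.2 API version YCSB's binding was written for
# db = fdb.open()        # now talks to a 6.2 cluster via the external client
```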
If you search this forum for “multitest”, you’ll find examples of the tooling that’s built into fdbserver that we use for benchmarking FoundationDB.
Overall, if you’re looking for 500 MB/s per host/disk, then FDB isn’t going to be able to provide that. (And some SSDs can’t either.)