I would fully expect that calling db.runAsync() and letting the transaction execute asynchronously in the background should result in better write throughput. A local test for me shows that running 1000 sequential transactions that set an increasing integer key to a 1KB value takes 9.95s, implying my local fsync() takes ~10ms. Converting that to run asynchronously on a thread pool causes it to take 1.296s. So I’m very surprised to hear that converting `run` to `runAsync` didn’t help, and I’d suggest taking a second look at the code to make sure you didn’t accidentally re-serialize the writes somehow. And to be fair, Lucene definitely isn’t calling fsync() after each block write the way the sequential series of transactions is.
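To make the difference concrete, here’s a minimal sketch of the two patterns using the FDB Java bindings. The key layout, the API version, and the 1KB payload are placeholders, not your actual code:

```java
import com.apple.foundationdb.Database;
import com.apple.foundationdb.FDB;
import com.apple.foundationdb.tuple.Tuple;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class SyncVsAsyncWrites {
    public static void main(String[] args) {
        FDB fdb = FDB.selectAPIVersion(610);
        try (Database db = fdb.open()) {
            byte[] value = new byte[1024]; // 1KB payload, as in the test above

            // Sequential: each commit blocks until its fsync() completes,
            // so 1000 commits pay for 1000 fsyncs back to back.
            for (int i = 0; i < 1000; i++) {
                final int key = i;
                db.run(tr -> {
                    tr.set(Tuple.from("seq", key).pack(), value);
                    return null;
                });
            }

            // Asynchronous: issue every commit up front and wait once at
            // the end, letting the fsyncs overlap instead of serializing.
            List<CompletableFuture<Void>> commits = new ArrayList<>();
            for (int i = 0; i < 1000; i++) {
                final int key = i;
                commits.add(db.runAsync(tr -> {
                    tr.set(Tuple.from("async", key).pack(), value);
                    return CompletableFuture.completedFuture((Void) null);
                }));
            }
            CompletableFuture.allOf(commits.toArray(new CompletableFuture[0])).join();
        }
    }
}
```

If the async version still doesn’t beat the sequential one, the re-serialization is probably happening outside FDB, e.g. by joining each future inside the loop rather than collecting them and waiting once.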
With “Note the tests are done locally on a development machine”, do you mean that you’re running a benchmark against the single FoundationDB instance that a default install creates? As you might have noticed from the numbers above, its performance is terrible. With the FDB activity going on around Cloudant, is there anyone with a small FDB cluster that you could borrow a subspace on? Our performance characteristics get much better as soon as the transaction logs are separated from the storage servers, and being closer to something that resembles a production cluster is generally nice when doing performance work.
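For reference, that separation is just a matter of process classes. A sketch of a foundationdb.conf assigning them, with placeholder ports and a single machine standing in for what would be separate machines in production:

```
[fdbserver.4500]
class = transaction

[fdbserver.4501]
class = storage

[fdbserver.4502]
class = storage
```

With this, the fsync-heavy log writes land on the `transaction` process while the `storage` processes serve reads, instead of one process doing both.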
The importance of the single network thread is that, after a certain point, you can’t gain more parallelism in issuing requests to FDB by adding more threads within your JVM. All of your JVM threads will still be submitting requests to the one FDB network thread, and once you saturate that thread, that’s the maximum throughput you’re going to get. I strongly suspect you aren’t going to saturate the FDB network thread from a single-threaded Lucene, though; as far as I’m aware, it’s mostly just a thread doing a memcpy and send/recv.
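As an illustration (again a sketch, with an arbitrary thread count and key layout), even this many-submitter pattern ultimately funnels every request through that one network thread:

```java
import com.apple.foundationdb.Database;
import com.apple.foundationdb.FDB;
import com.apple.foundationdb.tuple.Tuple;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class ManySubmitters {
    public static void main(String[] args) throws InterruptedException {
        Database db = FDB.selectAPIVersion(610).open();
        byte[] value = new byte[1024];

        // Eight JVM threads issuing transactions in parallel. All of them
        // hand their requests to the single FDB network thread, so adding
        // threads past its saturation point buys no extra throughput.
        ExecutorService pool = Executors.newFixedThreadPool(8);
        for (int t = 0; t < 8; t++) {
            final int thread = t;
            pool.submit(() -> {
                for (int i = 0; i < 1000; i++) {
                    final int key = i;
                    db.run(tr -> {
                        tr.set(Tuple.from(thread, key).pack(), value);
                        return null;
                    });
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.MINUTES);
        db.close();
    }
}
```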
I’d ballpark the “optimum transaction-to-chunk size” at around 50KB-100KB commits. Our general advice is to stay under 1MB in total per commit, and FDB will batch transactions together on its own if it can accept more. I agree that in your use case you’ll be inserting data sequentially, and that should be okay. That said, bulk loading an empty cluster sequentially puts a lot of strain on data distribution as it tries to carve out sensible shards.
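Concretely, a chunking scheme in that range might look like the following sketch; the key layout and the 80KB chunk size are illustrative assumptions, not a recommendation for your exact layer:

```java
import com.apple.foundationdb.Database;
import com.apple.foundationdb.FDB;
import com.apple.foundationdb.tuple.Tuple;
import java.util.Arrays;

public class ChunkedBlobWriter {
    // Illustrative chunk size inside the 50KB-100KB ballpark above.
    private static final int CHUNK_SIZE = 80 * 1024;

    // Write one blob as a series of commits, each far below the ~1MB
    // per-commit advice.
    static void writeBlob(Database db, String name, byte[] blob) {
        for (int off = 0, idx = 0; off < blob.length; off += CHUNK_SIZE, idx++) {
            final byte[] chunk =
                Arrays.copyOfRange(blob, off, Math.min(off + CHUNK_SIZE, blob.length));
            final int chunkIndex = idx;
            db.run(tr -> {
                tr.set(Tuple.from("blob", name, chunkIndex).pack(), chunk);
                return null;
            });
        }
    }

    public static void main(String[] args) {
        try (Database db = FDB.selectAPIVersion(610).open()) {
            writeBlob(db, "segment-0", new byte[512 * 1024]); // 512KB example
        }
    }
}
```

Combined with the `runAsync` pipelining sketched earlier, several of these chunk commits can be in flight at once while each individually stays well under the 1MB advice.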
And tangentially, I’ve taken this opportunity to finally file an issue, #1322, to track optimizing shard splitting for sequential insert workloads.
Overall, full-text indexes backed by FDB seem feasible, but I unfortunately don’t know enough about the particulars of Lucene to comment on it specifically. The FoundationDB Record Layer provides a full-text index on FDB. Its authors can speak to this better than I can, so I’ll leave it to @alloc and @nathanlws. The former has a previous post on the subject of full-text indexing that you might find interesting; the latter you’ll recognize as the author of the Lucene layer you’re using.