Distributed transaction with pushdown predicates

I’m working on a graph layer for FDB, and though it’s not on the near-term roadmap, I’m interested in the viability of implementing predicate pushdown to move as much filtering as possible local to the data, as opposed to the current scatter/gather approach. I read in another thread that it’s possible to identify which nodes own which key ranges, so I think query routing can be handled with what’s already written. The piece I’m curious about is whether it would be possible to encompass all of this in a single transaction. Reading the API and architectural docs, I can’t see a way to do this now. My question is: do you all think this would be possible at some point, or should I perhaps rethink my approach?
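
The routing idea can be sketched as follows. This is a pure-Python toy model, not the FDB API: `route_scan`, `boundaries`, and `owners` are all invented names, standing in for whatever the locality information actually gives you.

```python
# Toy sketch of key-range routing: given which node owns which key range,
# split a scan into per-owner sub-ranges so each part (plus its filter) can
# be sent to the owning node instead of gathering everything centrally.
# All names here are illustrative, not part of any FDB binding.
import bisect

def route_scan(boundaries, owners, begin, end):
    """Split the scan [begin, end) along ownership boundaries.

    boundaries: sorted keys where ownership changes, e.g. ["g", "p"]
    owners: owner of each resulting range; len(boundaries) + 1 entries
    Returns a plan [(owner, sub_begin, sub_end), ...] covering [begin, end).
    """
    plan = []
    cuts = [begin] + [b for b in boundaries if begin < b < end] + [end]
    for sub_begin, sub_end in zip(cuts, cuts[1:]):
        # The owner of a sub-range is the owner of its first key.
        owner = owners[bisect.bisect_right(boundaries, sub_begin)]
        plan.append((owner, sub_begin, sub_end))
    return plan
```

For example, a scan of `[c, t)` over three owners with boundaries at `g` and `p` would split into three pushed-down requests, one per owner.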

Thanks,
Ted

This is a good question. To do pushed-down reads serializably, I think you need three things:

  1. Send the snapshot read version of the transaction to the nodes doing the pushed-down reads, so that they read from the same snapshot. This is very easy to do with the current API.

  2. Add appropriate read conflict ranges to the master transaction you are going to commit, to reflect the pushed-down reads. In most cases this should be trivial: you already know what range of data you are scanning (you needed it to obtain the locality information to push the reads down in the first place), so just add the whole range. (And of course, if you only want the pushed-down reads to be snapshot isolated, you don’t have to do anything.)

  3. If there are any previous writes in the transaction, the pushed-down reads should take them into account. This sounds like a pain in the neck, but in many practical cases you don’t need to do it.

I could imagine an API to make (3) easier, if it turned out that important use cases actually hinge on it. Basically, you would ask the transaction for a list of its writes in a given key range, send that along with your push-down request, and then apply them locally before reading.
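
A sketch of what that could look like, again as an illustrative pure-Python model: `pending_writes_in_range` and `pushed_down_read_with_writes` are hypothetical names, and no such call exists in the bindings today.

```python
# Hypothetical shape of the API described for step (3): extract the
# transaction's uncommitted writes in a key range, ship them with the
# push-down request, and overlay them on the shard's snapshot before
# filtering locally.

def pending_writes_in_range(write_cache, begin, end):
    """Return this transaction's uncommitted mutations falling in [begin, end)."""
    return {k: v for k, v in write_cache.items() if begin <= k < end}

def pushed_down_read_with_writes(snapshot, local_writes, predicate):
    """Overlay the caller's uncommitted writes on the snapshot, then filter."""
    view = dict(snapshot)
    for key, value in local_writes.items():
        if value is None:
            view.pop(key, None)   # None models a pending clear/delete
        else:
            view[key] = value
    return {k: v for k, v in view.items() if predicate(k, v)}
```

The point of shipping only the writes in the scanned range is that the push-down request stays small: the remote node sees exactly the mutations that could change its answer, and nothing else from the transaction.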

Excellent, thanks Dave. I identified (2) as an option but then got stuck on what you pointed out in (1); I must have missed that capability in the Java driver. (3) will be a little trickier, but perhaps I can check our local transaction cache, which includes uncommitted mutations, before distributing the pushed-down reads, to see whether they will affect any of the elements I’m reading. That said, your proposed API sounds like it may be a better approach.

Thanks much,
Ted