Slicing a key range to work with analytical engines (e.g. Spark)

Most analytical engines like Apache Spark slice the data into chunks and perform some sort of map/reduce action on them. In FDB, reading a range is well documented, and the docs hint that offsets should not be used to paginate through data.
I'm using the Java API. Is there any way to slice the data into N ranges, or into multiple ranges of a fixed size? If not, what would be the best way to implement this?

This feature is being worked on, the issue tracking it is here:


Thanks @SteavedHams … what would be the best workaround in the meantime? Is LocalityUtil an option?

LocalityUtil gets you keys that split your key range into similarly sized chunks. This should be appropriate if you're OK with your chunks being as large as 500 MB.
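As a sketch of that workaround: `LocalityUtil.getBoundaryKeys(db, begin, end)` in the Java bindings returns the shard-boundary keys inside a range. Pairing consecutive boundaries gives sub-ranges that can each be read independently (e.g. one Spark partition per sub-range). The `toRanges` helper below is hypothetical, not part of the FDB API; it only shows the pairing logic, with the actual `getBoundaryKeys` call left to the caller since it needs a live cluster.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Sketch: turn the boundary keys yielded by
// com.apple.foundationdb.LocalityUtil.getBoundaryKeys(db, begin, end)
// into a list of [begin, end) sub-ranges. Helper name and shape are
// illustrative assumptions, not an FDB API.
public class RangeSplitter {

    // boundaries: the keys from getBoundaryKeys, in key order, within [begin, end).
    // Returns pairs {subBegin, subEnd}; together they cover [begin, end) exactly.
    public static List<byte[][]> toRanges(byte[] begin, byte[] end, List<byte[]> boundaries) {
        List<byte[][]> ranges = new ArrayList<>();
        byte[] cursor = begin;
        for (byte[] boundary : boundaries) {
            // Skip a boundary equal to the cursor (e.g. `begin` itself may be returned).
            if (Arrays.equals(boundary, cursor)) {
                continue;
            }
            ranges.add(new byte[][] { cursor, boundary });
            cursor = boundary;
        }
        ranges.add(new byte[][] { cursor, end }); // tail range up to `end`
        return ranges;
    }
}
```

In real code the `getBoundaryKeys` iterator is a `CloseableAsyncIterator`, so it should be drained inside a try-with-resources block; each resulting pair can then be read with an ordinary `tr.getRange(subBegin, subEnd)` in its own transaction or Spark task.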

500 MB is too big :worried:
Little things like this make adoption very hard.