What would be the best way to parallelize reads on the client given a certain query plan? Imagine something like an Apache Spark-type setting.
First off, I understand there will probably be a bunch of limitations. However, I was thinking I could maybe use the locality info for the particular record type's key range to give me sub-ranges to scan in parallel, and then combine that with the continuation field?
I would also somehow have to specify an “end” tuple for each sub-range, which might not be possible to get from the planner?
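To make the idea concrete, here is roughly the kind of thing I'm imagining, sketched against the plain FDB Java bindings (LocalityUtil.getBoundaryKeys) rather than the Record Layer planner. The `"my-record-type"` prefix, class name, and API version are just placeholders, and in a real Spark job each sub-range would become an input split instead of a local future:

```java
import com.apple.foundationdb.Database;
import com.apple.foundationdb.FDB;
import com.apple.foundationdb.LocalityUtil;
import com.apple.foundationdb.Range;
import com.apple.foundationdb.async.AsyncUtil;
import com.apple.foundationdb.async.CloseableAsyncIterator;
import com.apple.foundationdb.tuple.Tuple;

import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.concurrent.CompletableFuture;

public class ParallelScanSketch {
    public static void main(String[] args) throws Exception {
        FDB fdb = FDB.selectAPIVersion(710);
        try (Database db = fdb.open()) {
            // Placeholder key range; in the Record Layer this would be the
            // record store subspace covering the record type I want to scan.
            Range recordRange = Tuple.from("my-record-type").range();

            // Ask the cluster for shard boundary keys inside that range.
            List<byte[]> boundaries;
            try (CloseableAsyncIterator<byte[]> it =
                     LocalityUtil.getBoundaryKeys(db, recordRange.begin, recordRange.end)) {
                boundaries = AsyncUtil.collectRemaining(it).join();
            }

            // Turn the boundary keys into [begin, end) sub-ranges, one per partition.
            List<Range> partitions = new ArrayList<>();
            byte[] begin = recordRange.begin;
            for (byte[] boundary : boundaries) {
                if (!Arrays.equals(boundary, begin)) {
                    partitions.add(new Range(begin, boundary));
                    begin = boundary;
                }
            }
            partitions.add(new Range(begin, recordRange.end));

            // Scan each partition in its own read transaction. In a Spark-like
            // setting each Range would instead be handed to a worker task, which
            // could resume from a continuation if it hits transaction limits.
            List<CompletableFuture<Integer>> counts = new ArrayList<>();
            for (Range part : partitions) {
                counts.add(db.readAsync(tr ->
                        tr.getRange(part).asList().thenApply(List::size)));
            }
            int total = counts.stream().mapToInt(CompletableFuture::join).sum();
            System.out.println("Scanned " + total + " key-value pairs across "
                    + partitions.size() + " partitions");
        }
    }
}
```

The part I'm unsure about is the last step: instead of `tr.getRange(part)` I would want to execute the planner's query plan restricted to `part`, which is where the “end” tuple question above comes in.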