I have a workload with three tasks:
A. Inserting new keys
B. Aggregating those inserted keys through some function which takes N keys as input and outputs 1 key. This task performs many small range reads of the un-aggregated keys and decides to aggregate when the number of keys in a given range grows too large.
C. Reading the aggregated and un-aggregated keys in a range at snapshot isolation
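For concreteness, the decision rule in Task B (aggregate a range only once it holds too many un-aggregated keys) is roughly like this. The threshold value and aggregation function here are illustrative placeholders, not my actual ones:

```python
# Sketch of Task B's decision rule: after a small range read, fold the
# un-aggregated keys into one aggregated key only if the range has grown
# past a threshold. THRESHOLD and aggregate_fn are hypothetical.

THRESHOLD = 8  # illustrative limit on un-aggregated keys per range

def maybe_aggregate(unaggregated, aggregate_fn):
    """Return (aggregated_value, remaining_keys).

    If the range is still small, do nothing and keep the keys as-is;
    otherwise aggregate N keys down to 1 and clear the range.
    """
    if len(unaggregated) <= THRESHOLD:
        return None, unaggregated
    return aggregate_fn(unaggregated), []
```

So for example `maybe_aggregate(list(range(10)), sum)` collapses the range to `(45, [])`, while a range of two keys is left untouched.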
Task B is critical to ensure Task C completes in a timely manner. Task A is also important, but its output is mostly useless if B doesn't happen soon enough.
Is it reasonable to use a batch priority transaction for Task C? Retrying later is generally OK for this task, so I'm hoping batch priority means operations from Tasks A and B will always be serviced first when they run concurrently with operations from Task C.
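For reference, here is roughly how I'd tag Task C's transactions. I've sketched it with stub classes in place of a real cluster so it's self-contained; in the actual FoundationDB Python bindings I believe this would be `fdb.open()` plus `tr.options.set_priority_batch()` on each transaction:

```python
# Stub sketch of where Task C would set batch priority. The classes
# below are stand-ins for the real client objects; only the shape of
# the set_priority_batch() call is the point.

class TransactionOptions:
    def __init__(self):
        self.priority = "default"

    def set_priority_batch(self):
        # In the real bindings this marks the transaction as batch
        # priority, so the cluster deprioritizes it relative to
        # default-priority work from Tasks A and B.
        self.priority = "batch"

class Transaction:
    def __init__(self):
        self.options = TransactionOptions()

def task_c_snapshot_read(key_range):
    tr = Transaction()
    tr.options.set_priority_batch()  # Task C tolerates delays/retries
    # ... snapshot range read of aggregated + un-aggregated keys over
    # key_range would go here ...
    return tr
```

The intent is that only Task C opts into batch priority, while A and B keep the default.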