I should have noticed and commented earlier…
Splitting work across transactions is a common pattern and there’s often a pretty clean way to make this work while maintaining ACID properties. Generally, the approach is to add a layer of indirection between the data “chunks” and the notion of a completed “file”. A data model could comprise two types of tuples, one for file paths and one for data chunks:
files/file1 = [id1]
files/file2 = [id2]
data/id1/chunk1 = [bytes]
data/id1/chunk2 = [bytes]
data/id1/chunk3 = [bytes]
data/id2/chunk1 = [bytes]
A process to make this work could be:
- At the client, determine a unique ID for the data contents. Let's say this
is 42 for this example.
- Begin, over the course of multiple transactions, to upload data into the
data section of the database.
- When the uploads are all complete, set the
files/x key to point at the corresponding data region (e.g. files/file1 = [42]).
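A minimal sketch of that flow, assuming a transactional key-value store (stood in for here by a plain in-memory dict with a hypothetical `transact` helper; the chunk size, key layout, and helper names are all illustrative, not a real client API):

```python
CHUNK_SIZE = 64 * 1024  # illustrative; real stores cap individual value sizes

store = {}  # stand-in for the transactional key-value store


def transact(fn):
    # Stand-in for "run fn inside one ACID transaction".
    return fn(store)


def upload_file(path, file_id, data):
    # Upload chunks over the course of multiple transactions.
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for n, chunk in enumerate(chunks, start=1):
        transact(lambda s, n=n, c=chunk: s.__setitem__(f"data/{file_id}/chunk{n}", c))
    # Final, small transaction: publish the file by pointing it at the data region.
    transact(lambda s: s.__setitem__(f"files/{path}", file_id))


def read_file(path):
    # Follow the indirection: read the pointer, then the chunks in order.
    file_id = transact(lambda s: s.get(f"files/{path}"))
    if file_id is None:
        return None
    out, n = b"", 1
    while True:
        chunk = transact(lambda s, n=n: s.get(f"data/{file_id}/chunk{n}"))
        if chunk is None:
            break
        out += chunk
        n += 1
    return out
```

The key property is that the files/x write is a single small transaction, so readers either see the complete file or no file at all, regardless of how many transactions the chunk uploads took.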
A few considerations: this works most simply for immutable files. Reads of especially large files could take more than 5 seconds and would therefore have to span more than one transaction. That is trivial if the files are not changing and, if they are changing, can still be handled by simply pointing the
file to a new data chunk region on each update (at the risk of a long read returning a consistent, complete copy of a file that has since been deleted or modified).
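The update path can be sketched in the same spirit (again with a dict standing in for the store; the IDs and keys are hypothetical):

```python
# Existing state: file1 points at the id1 data region.
store = {"files/file1": "id1", "data/id1/chunk1": b"old contents"}

# Writer: upload the new contents under a fresh ID across many transactions...
store["data/id9/chunk1"] = b"new contents"

def swap_pointer(s, path, new_id):
    # ...then publish with one small transaction that repoints the file.
    # The old chunks stay in place until garbage-collected, so an in-flight
    # multi-transaction read still finds a complete (if stale) copy.
    s[f"files/{path}"] = new_id

swap_pointer(store, "file1", "id9")
```

Cleaning up superseded data regions (once no reader could still need them) is left as a separate background task.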
Hopefully that's something to get you started!