Looking for feedback on a Go FoundationDB blob store

I created a blob store layer in Go, fdb-blobs, but I'm pretty new to both Go and FoundationDB, so I'm looking for feedback, especially around the API but also the technical usage of FoundationDB.

Thanks in advance.

Hello, and welcome.

I was most interested in the storing part of the blob store you built. The code is at:

Eventually it goes into store.write at:

What I think would be interesting to have is the ability to restart an upload.

Chunk size is known at compile time; shouldn't it be the maximum value size, 100 KB? I see no reason to use another chunk size.

Here is my blob store in python:

# Assumed imports: `found` is the asyncio FoundationDB bindings, `sliced`
# comes from more_itertools, and `hasher` is assumed to be a hash
# constructor such as hashlib.blake2b.
from uuid import UUID, uuid4

async def get_or_create(tx, bstore, blob):
    hash = hasher(blob).digest()
    key = found.pack((bstore.prefix_hash, hash))
    maybe_uid = await found.get(tx, key)
    if maybe_uid is not None:
        return UUID(bytes=maybe_uid)
    # Otherwise create the hash entry and store the blob with a new uid
    # TODO: Use a counter and implement a garbage collector, and implement
    # bstore.delete
    uid = uuid4()
    found.set(tx, key, uid.bytes)
    # `chunk` avoids shadowing the builtin `slice`
    for index, chunk in enumerate(sliced(blob, found.MAX_SIZE_VALUE)):
        found.set(tx, found.pack((bstore.prefix_blob, uid, index)), bytes(chunk))
    return uid

Your code is better because it allows storing blobs bigger than the maximum transaction size, 10 MB.
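To make the point concrete, here is a minimal sketch of the chunking idea: the blob is split into fixed-size pieces, and each piece would be written under a key like (blobPrefix, id, index). Splitting is what lets a store hold blobs larger than FoundationDB's 10 MB transaction limit, since chunks can be spread across several transactions. The `chunkBlob` name is my own illustration, not fdb-blobs' actual code.

```go
package main

import "fmt"

// chunkBlob splits a blob into chunks of at most chunkSize bytes.
// In a real store, chunk i would be stored under the tuple key
// (blobPrefix, id, i), possibly committed across several transactions.
func chunkBlob(blob []byte, chunkSize int) [][]byte {
	var chunks [][]byte
	for start := 0; start < len(blob); start += chunkSize {
		end := start + chunkSize
		if end > len(blob) {
			end = len(blob)
		}
		chunks = append(chunks, blob[start:end])
	}
	return chunks
}

func main() {
	blob := make([]byte, 25_000)
	chunks := chunkBlob(blob, 10_000)
	fmt.Println(len(chunks), len(chunks[0]), len(chunks[2])) // 3 10000 5000
}
```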

Hey @amirouche thanks for looking at my code :purple_heart:

Chunk size is known at compile time; shouldn't it be the maximum value size, 100 KB? I see no reason to use another chunk size.

The reason I’m using 10 KB as the default is this documentation saying that values around 10 KB perform better.

I’m making the chunk size customisable to allow for change if this performance characteristic changes in the future.
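One common way to expose a customisable chunk size in Go is the functional-options pattern; here is a small sketch under that assumption. `Store`, `NewStore`, and `WithChunkSize` are illustrative names, not necessarily fdb-blobs' real API.

```go
package main

import "fmt"

// defaultChunkSize follows the FoundationDB guidance that ~10 KB values
// perform well; it is only a default, not a hard-coded constant.
const defaultChunkSize = 10_000

type Store struct {
	chunkSize int
}

type Option func(*Store)

// WithChunkSize lets callers override the default if the performance
// characteristics of FoundationDB change in the future.
func WithChunkSize(n int) Option {
	return func(s *Store) { s.chunkSize = n }
}

func NewStore(opts ...Option) *Store {
	s := &Store{chunkSize: defaultChunkSize}
	for _, opt := range opts {
		opt(s)
	}
	return s
}

func main() {
	fmt.Println(NewStore().chunkSize)                       // 10000
	fmt.Println(NewStore(WithChunkSize(100_000)).chunkSize) // 100000
}
```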

What I think would be interesting to have is the ability to restart an upload.

Could you elaborate a bit on what you would like for restarting an upload?

I think I solved that by allowing automatic cleanup of orphaned uploads via Store.DeleteUploadsStartedBefore, so you can just start a new upload without a problem. I'm also trying to comply with GDPR by only keeping uploads that were actually committed in a transaction after the upload, using Store.CommitUpload.

I obviously haven’t really tried all of this out in practice, as I’m building the foundation for an event sourcing library and am currently just putting the foundational pieces together.