I’m a bit new to FDB, coming from a background of using Django and Flask with relational databases. I’m wondering if there is an established convention or best-practice for writing unit tests and integration tests that hit the database, and, in particular, cleaning up after those tests.
In the Django world, it’s typical to use an SQLite backend for tests and simply delete the .sqlite file at the end, or to use a separate database server, and use the ORM to remove all of the rows when a test completes.
Does FoundationDB have an equivalent to this? Right now, my crude, brute-force implementation is to just delete an entire subspace:
@fdb.transactional
def clean_subspace(tr, subspace):
    del tr[subspace.range()]
I had some success replacing FoundationDB with LMDB to emulate a distributed system in a single process (for demos, local development mode, and fast simulated tests).
Of course, this didn’t replace proper FDB-based tests (those still had to be run), but it helped establish a quick feedback loop, especially if you use ASYNC flushing for LMDB.
Well, you could start up the database and clean it out between each test, I suppose, but that seems pretty heavyweight. Cleaning out the subspace (or, if it’s against a cluster with no real data in it, the whole database: del tr[b'':b'\xff']) isn’t that bad a solution, IMO. If you’re using pytest, you could even make it part of the setup for whatever fixture provides the database object, and then it essentially happens automatically.
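A minimal sketch of that pytest approach, assuming the Python bindings and a dedicated test cluster (the fixture name and API version are illustrative, and this needs a running cluster to execute):

```python
import fdb
import pytest

fdb.api_version(710)  # assumed API version; match your installed client


@fdb.transactional
def _clear_all(tr):
    # Clear every key in the database. Only safe against a dedicated
    # test cluster that holds no real data.
    del tr[b'':b'\xff']


@pytest.fixture
def db():
    database = fdb.open()   # uses the default cluster file
    _clear_all(database)    # each test starts from an empty database
    return database
```

Any test that takes `db` as an argument then gets a freshly cleaned database without doing anything extra.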
A trick that can work for many unit tests is to create a transaction, set the read version to 1, clear the entire database, do your unit test using the transaction, and then just don’t commit it. If you do this right, you don’t even need a database to connect to, since all reads can be satisfied locally by the fdb client and the writes are rolled back!
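A hedged sketch of that trick, again assuming the Python bindings (the API version is illustrative). Because the transaction begins by clearing the whole keyspace, every read afterwards can be answered from the transaction’s own write buffer, and dropping the transaction without committing discards everything:

```python
import fdb

fdb.api_version(710)  # assumed API version; match your installed client
db = fdb.open()

tr = db.create_transaction()
tr.set_read_version(1)   # skip asking the cluster for a read version
del tr[b'':b'\xff']      # clear everything, inside this transaction only

# Exercise the code under test using `tr` as the transaction.
tr[b'answer'] = b'42'    # writes stay buffered in the client
value = tr[b'answer']    # reads are served from the local write buffer

# ... assertions on `value` and other reads go here ...

# Deliberately never call tr.commit(): the transaction is dropped and
# nothing ever reaches the database.
```

The design point is that all state lives in the client-side transaction buffer, so tests stay fast and leave no residue to clean up.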