Transaction Log

I am evaluating FoundationDB as a replacement for our in-house in-memory transactional blob store. Our current database has transaction logs, which give us the following features:

  1. Transaction Audit
  2. Downstream systems sync

Transaction Audit: Users can view the transaction history in the client. The server keeps the most recent transactions in memory for performance, and older ones are served from the transaction log file.

Downstream Systems Sync: The database has APIs that can push/pull transaction logs over the network. Downstream systems use these, applying filters to be notified about transactions on the objects they are interested in.

Does FoundationDB have features we can use to implement this functionality?


Hi rahul,

  1. Transaction Audit

FoundationDB does not provide a mechanism for accessing the transaction history (it does store the last 5 seconds / 5 million versions of mutations, but that's more of an implementation detail).

This could be implemented as a "layer", i.e. an abstraction built on top of the underlying key-value store. You could, for example, never modify the keys you want a historical record of, but instead add the new value and update a pointer to it. The history of a key k might look like this:

/history/k -> 3
/history/k/1 -> value1
/history/k/2 -> value2
/history/k/3 -> value3

So to read k you would first read /history/k and get 3, then read /history/k/3. (You probably want to store a fixed-width binary representation of the version numbers, or otherwise make sure they sort lexicographically by version number.) You might also want some way to encode that the value for the key is "cleared", i.e. that the key is not present.
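To make the scheme concrete, here is a minimal sketch of that versioned-key layout. A plain dict stands in for the FoundationDB key-value store, and the /history/... key names follow the example above; a real layer would use the fdb bindings and run each read-modify-write inside one transaction.

```python
def encode_version(v):
    # Fixed-width big-endian encoding so versions sort lexicographically.
    return v.to_bytes(8, "big")

def write_versioned(store, key, value):
    """Append a new version of `key` instead of overwriting it."""
    pointer = b"/history/" + key
    current = int.from_bytes(store.get(pointer, b"\x00" * 8), "big")
    new_version = current + 1
    store[pointer] = encode_version(new_version)                # /history/k -> N
    store[pointer + b"/" + encode_version(new_version)] = value  # /history/k/N -> value

def read_versioned(store, key):
    """Read the latest value: first the pointer, then the versioned key."""
    pointer = b"/history/" + key
    if pointer not in store:
        return None
    return store[pointer + b"/" + store[pointer]]

store = {}
write_versioned(store, b"k", b"value1")
write_versioned(store, b"k", b"value2")
write_versioned(store, b"k", b"value3")
print(read_versioned(store, b"k"))  # b'value3'
```

In a real layer, all old versions /history/k/1 .. /history/k/N remain readable with a range read, which is exactly what the audit view needs.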

Another approach would be to log all the writes/clears of each transaction in a separate keyspace within the database itself. You would update the log in the same transaction as your actual mutations, so the two can never diverge. This is roughly how backups work.
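A rough sketch of that idea, again with dicts standing in for the data and log keyspaces. In a real layer, both the data writes and the log write would go through one @fdb.transactional function so they commit atomically; the transaction-id counter here is just an illustrative stand-in for a real ordering key such as a versionstamp.

```python
import itertools

_txn_counter = itertools.count(1)  # stand-in for a versionstamp/sequence key

def apply_transaction(store, log, mutations):
    """Apply (op, key, value) mutations and record them in the log keyspace.

    op is "set" or "clear". Data and log are written together, modeling a
    single atomic FoundationDB transaction.
    """
    txn_id = next(_txn_counter)
    for op, key, value in mutations:
        if op == "set":
            store[key] = value
        elif op == "clear":
            store.pop(key, None)
    # Log entry keyed by transaction id, ordered for later audit/replay.
    log[txn_id] = list(mutations)
    return txn_id

store, log = {}, {}
t1 = apply_transaction(store, log, [("set", b"a", b"1"), ("set", b"b", b"2")])
t2 = apply_transaction(store, log, [("clear", b"a", None)])
print(store)    # {b'b': b'2'}
print(log[t2])  # [('clear', b'a', None)]
```

Downstream consumers can then range-read the log keyspace in order and apply their own filters.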

See the FoundationDB documentation (in particular the "transactions enable abstraction" section) for FoundationDB's philosophy here.

  2. Downstream systems sync

This might be a good fit for the watch feature, which lets you watch for changes on a particular key. It's more efficient than polling (although I believe the implementation does fall back to polling if there are too many watches registered).
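To show the semantics being described (not the fdb API itself), here is a toy in-memory model of a one-shot watch: a client registers interest in a key and is notified on the next change, rather than polling the key repeatedly.

```python
class WatchableStore:
    """Toy model of one-shot watch semantics on a key-value store."""

    def __init__(self):
        self._data = {}
        self._watches = {}  # key -> list of callbacks, each fired at most once

    def watch(self, key, callback):
        # Register interest; comparable to getting back a future in fdb.
        self._watches.setdefault(key, []).append(callback)

    def set(self, key, value):
        self._data[key] = value
        # Fire and clear pending watches for this key (one-shot, like a future).
        for cb in self._watches.pop(key, []):
            cb(key, value)

events = []
store = WatchableStore()
store.watch(b"orders/42", lambda k, v: events.append((k, v)))
store.set(b"orders/42", b"shipped")    # fires the watch
store.set(b"orders/42", b"delivered")  # no watch pending, nothing fires
print(events)  # [(b'orders/42', b'shipped')]
```

A downstream consumer would typically re-register the watch after each notification, reading the new state in between.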

Actually, FoundationDB does have a mechanism to automatically log all mutations to a range of keys; it is used to implement the backup and DR tools. However, it is generally not recommended for applications to use it for change monitoring: new versions of FoundationDB will generally change the format, so such applications won't enjoy FoundationDB's otherwise excellent backward API compatibility.

Others and I discussed similar questions about applications interested in history in this thread:

Thanks Dave,
I believe the mutation log is very close to our requirement. We are not looking for the history of specific keys, but rather a sort of redo log or event log over the entire database, which is very similar to a changefeed requirement. If the DR/backup endpoint could treat the mutations as data, we would have what we want. Having a transaction-log layer would double the transaction size and increase write latencies.

If this is a generic requirement, I would be glad to contribute.