Hot write keys with atomic operations and constraints?

Hi all,

I'm getting familiar with FoundationDB and have a newbie question. We're looking at using FoundationDB for financial operations (a general ledger).
Are there any patterns for implementing something like an atomic add, but with certain constraints (e.g. the value cannot go below some threshold)?
I tried some basic tests on a single machine, and the atomic add seems to be far more performant than simply reading and writing. We could probably get away with read/write for most of our keys, but the problem is that we're going to have some very hot write keys that would receive a constant stream of updates.

Thanks!

I don't think there's an atomic operation that would help with that. I'm assuming the semantics you're looking for are something like: insert some data and update a counter, BUT if the counter exceeds some value, then fail the transaction.

The issue here is wrapped up in how FDB atomic operations work. The basic idea is that FDB atomic ops serialize into the transaction a write to a key, but rather than being a normal “set key” write, they contain a “mutation”, like “add 3” or “set the key to the max of its current value and 7”. This mutation is considered by the transaction resolver to be a write without a read, and the actual mutation is executed on the storage server after the transaction has already been committed. That means an atomic operation can never do anything like “apply this operation IF some condition on the value ELSE fail the transaction”, because the transaction is committed before the value is ever read. The closest thing we have is probably the COMPARE_AND_CLEAR operation, which takes a key and a value, and clears the key if the key’s current value matches the supplied value (which is usually 0, and can be used to implement counters that disappear when they reach zero).
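For reference, here's a minimal sketch of what these blind mutations look like in the Go bindings, using ADD plus COMPARE_AND_CLEAR to build a counter that disappears when it reaches zero. The key name and API version are made up, and it assumes the counter is stored as a little-endian int64 (the encoding ADD operates on):

package main

import (
  "encoding/binary"

  "github.com/apple/foundationdb/bindings/go/src/fdb"
)

func main() {
  fdb.MustAPIVersion(730)
  db := fdb.MustOpenDefault()
  key := fdb.Key("counter") // hypothetical key

  // ADD interprets its parameter as a little-endian integer, so a negative
  // delta is just the two's-complement encoding of -1.
  delta := make([]byte, 8)
  binary.LittleEndian.PutUint64(delta, uint64(int64(-1)))

  if _, err := db.Transact(func(tr fdb.Transaction) (any, error) {
    // Both mutations are writes without reads: they can't conflict with
    // other writers of this key, but they also can't check a constraint.
    tr.Add(key, delta)
    tr.CompareAndClear(key, make([]byte, 8)) // clear the key if its value is exactly 0
    return nil, nil
  }); err != nil {
    panic(err)
  }
}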

I think the only way to get the desired semantics with FDB is to read the key during the transaction and validate that the proposed increment doesn’t break your constraint (see the sketch below), but that results in all operations serializing around that one key, which could hurt performance. You could potentially address this by splitting the limit into multiple partitions, each with its own smaller maximum, randomly assigning each new update to one of those partitions, and then validating the update only against that partition’s maximum, though that’s a bit involved. Another approach would be to write updates to a queue-like data structure and have another process drain and apply them. Effectively, this makes the data durable up front and then performs the updates serially, which may be more efficient than contending on the counter key.
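To make the read-and-validate idea concrete, here's a rough sketch (same imports as the snippet above, plus "errors"); addWithFloor and ErrBelowThreshold are names I made up, and the counter is again assumed to be a little-endian int64:

var ErrBelowThreshold = errors.New("counter would drop below threshold")

// addWithFloor increments key by delta, failing the transaction if the
// result would fall below floor. The read creates a conflict range, so
// concurrent writers to this key will serialize (and retry) here.
func addWithFloor(db fdb.Database, key fdb.Key, delta, floor int64) error {
  _, err := db.Transact(func(tr fdb.Transaction) (any, error) {
    raw := tr.Get(key).MustGet()
    var current int64
    if len(raw) == 8 {
      current = int64(binary.LittleEndian.Uint64(raw))
    }
    if current+delta < floor {
      return nil, ErrBelowThreshold // non-retryable application error aborts the transaction
    }
    buf := make([]byte, 8)
    binary.LittleEndian.PutUint64(buf, uint64(current+delta))
    tr.Set(key, buf)
    return nil, nil
  })
  return err
}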

Another solution would be to let the atomic operation take the value below the threshold and then correct it at read time. This can be a good fit because it lets you update the value very frequently: updates are done blindly with the atomic operation, while reads follow the logic below:

// Assumes fdb.MustAPIVersion has been called, "encoding/binary" is imported,
// and the counter at `key` is stored as a little-endian int64.
db := fdb.MustOpenDefault()
value, err := db.Transact(func(tr fdb.Transaction) (any, error) {
  raw := tr.Get(key).MustGet()
  var v int64
  if len(raw) == 8 { // a missing key reads as zero
    v = int64(binary.LittleEndian.Uint64(raw))
  }
  if v < 0 {
    // The blind atomic adds overshot; clamp back to zero on read.
    v = 0
    tr.Set(key, make([]byte, 8)) // eight zero bytes == int64(0)
  }
  return v, nil
})
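The write path that pairs with this would then just be a blind atomic add, continuing with the same db and key; amount here is a hypothetical int64 debit:

delta := make([]byte, 8)
binary.LittleEndian.PutUint64(delta, uint64(-amount)) // negative delta, little-endian two's complement
_, err = db.Transact(func(tr fdb.Transaction) (any, error) {
  tr.Add(key, delta) // blind write: no read conflict range, so writers don't serialize
  return nil, nil
})

Because the write side never reads the key, the hot writers don't conflict with each other, and the occasional Set in the read path only fires when the value has drifted below zero.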