Go lang AddReadConflictKey AddWriteConflictKey

I have a counterKey with value 0. A function will read it, process it, and at the end increment the value. Even when the function is executed concurrently several million times, the value read from counterKey must be unique each time. In other words, a second concurrent call of the function must only execute after one instance has completed successfully. I see that it is not wise to rely only on func (d Database) Transact(f func(Transaction) (interface{}, error)) (interface{}, error) for this, and that I have to use func (t Transaction) AddReadConflictKey(key KeyConvertible) error and func (t Transaction) AddWriteConflictKey(key KeyConvertible) error. However, I am unable to understand how the read and write conflict keys work. Can you please explain it to me in layman's terms with a Go example?

Tagging a few great minds who have helped me in the past. @gaurav @alexmiller @ThomasJ @alloc

_/\_ Thanks in advance.

In order to read-modify-update a counter, the Transact method (transaction retry loop) that you've linked will work without needing explicit read and write conflicts. However, since you are updating a single key concurrently from multiple transactions, there could be a lot of conflicts between transactions.

Conflict ranges (read/write conflicts) are used to determine whether a transaction conflicts with another concurrent transaction, and they are what guarantee serializable isolation. When a transaction reads a key or key range, those keys/key ranges are implicitly added to the transaction as read conflicts; similarly, when a transaction mutates a key, that key is implicitly added to the transaction as a write conflict.

(This behavior can be changed using snapshot reads and other transaction options.)

At commit time, FDB (the proxy and resolver) verifies that none of the read conflicts of the committing transaction overlap with the write conflicts of any other transaction that committed after the start of the given transaction.
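To make that concrete, here is a minimal sketch in Go (the key and function names are just placeholders). A snapshot read deliberately skips the implicit read conflict range, so the example adds it back by hand with AddReadConflictKey; the AddWriteConflictKey call is shown only to make the write side visible, since a plain Set already adds it implicitly.

```go
package example

import (
	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

func setWithExplicitConflicts(db fdb.Database, key fdb.Key) error {
	_, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
		// A snapshot read does NOT add an implicit read conflict range.
		if _, err := tr.Snapshot().Get(key).Get(); err != nil {
			return nil, err
		}

		// Add the read conflict back explicitly: the resolver will now abort
		// (and Transact will retry) this transaction if another transaction
		// writes the key between our read version and our commit.
		if err := tr.AddReadConflictKey(key); err != nil {
			return nil, err
		}

		// The Set below already adds this implicitly; it is spelled out here
		// only to show where the write conflict comes from.
		if err := tr.AddWriteConflictKey(key); err != nil {
			return nil, err
		}

		tr.Set(key, []byte("new value"))
		return nil, nil
	})
	return err
}
```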

You can read a bit more about this concept here and here

Hm, the right solution to this kind of depends on the semantics you want from this key.

There’s more on how to use read and write conflicts in @gaurav’s answer, but I think it might be worth considering the requirements here and seeing if there’s a better way.

Unfortunately, there's not a great way to get exactly what you want from FDB if you require a key that produces (1) a unique value you can read during the course of the transaction, (2) a value that monotonically increases by 1 each time it is written, and (3) scalability (i.e., support for concurrent operations). I think you can get any 2 of those 3. (You actually can get all three if you try really hard, but it requires maybe more write amplification than you'd like and more specialized logic.)

If you want (1) and (2), you can just read the key, increment its value by one, and then write it back to the database. When the transaction commits, the resolver checks that no one else has written that key since this transaction read it (unless one uses the snapshot isolation level), so as long as your transaction commits, you are guaranteed to have gotten a unique value (and if it doesn't commit, it's as if it never happened). However, by its very nature, this means that no two such transactions can operate at the same time, as they will conflict on this key.
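A minimal sketch of that pattern, assuming the counter is stored as an 8-byte little-endian integer (the function and key names are mine, not from your code):

```go
package example

import (
	"encoding/binary"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

// nextCounter returns a value that is unique per committed transaction, but
// concurrent callers conflict on counterKey and retry one another.
func nextCounter(db fdb.Database, counterKey fdb.Key) (uint64, error) {
	val, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
		var current uint64
		if raw := tr.Get(counterKey).MustGet(); raw != nil {
			current = binary.LittleEndian.Uint64(raw)
		}
		next := current + 1

		buf := make([]byte, 8)
		binary.LittleEndian.PutUint64(buf, next)
		tr.Set(counterKey, buf)

		return next, nil
	})
	if err != nil {
		return 0, err
	}
	return val.(uint64), nil
}
```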

If you want (2) and (3), you can use the atomic ADD operation: https://godoc.org/github.com/apple/foundationdb/bindings/go/src/fdb#Transaction.Add When the transaction commits, it will increment whatever value is currently stored in the database at the given key by the passed parameter (so you could pass it 1). The value of the key will then essentially be equal to the number of times it has been written to. However, the write is entirely blind, and you don't have access to the value of that key before the update (and after the update, it might already have been updated by other transactions). The Record Layer uses this to maintain, for example, an index of how many records are in a database, and it works well for that, but it doesn't work if you need the value back.
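For example, something like this (a sketch; note that the Add operand must be a little-endian encoded integer):

```go
package example

import (
	"encoding/binary"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

// bumpCounter blindly adds 1 to counterKey. No read conflict is created, so
// any number of transactions can increment concurrently without retries, but
// the caller never learns the resulting value inside the transaction.
func bumpCounter(db fdb.Database, counterKey fdb.Key) error {
	one := make([]byte, 8)
	binary.LittleEndian.PutUint64(one, 1)

	_, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
		tr.Add(counterKey, one)
		return nil, nil
	})
	return err
}
```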

If you want (1) and (3), you can use versionstamp operations. See: https://godoc.org/github.com/apple/foundationdb/bindings/go/src/fdb#Transaction.SetVersionstampedKey and https://godoc.org/github.com/apple/foundationdb/bindings/go/src/fdb#Transaction.SetVersionstampedValue Those operations let you write the database's commit version (along with some other disambiguating bytes) into the database. This value is guaranteed to be unique for each transaction, and it is monotonically increasing (though not by 1 each time; it can go up by an essentially arbitrary amount between transactions). You can't inspect this value while the transaction is ongoing, but you can get it after the transaction completes, and you can use those two methods to write the versionstamp into database keys and values.
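A sketch of that approach, assuming a recent API version with the tuple layer's versionstamp support (IncompleteVersionstamp / PackWithVersionstamp) and GetVersionstamp; the function name and prefix are placeholders:

```go
package example

import (
	"github.com/apple/foundationdb/bindings/go/src/fdb"
	"github.com/apple/foundationdb/bindings/go/src/fdb/tuple"
)

// recordWithVersionstamp writes payload under a key containing the commit
// versionstamp (unique and monotonically increasing, but not dense), and
// returns the actual versionstamp once the transaction has committed.
func recordWithVersionstamp(db fdb.Database, prefix []byte, payload []byte) ([]byte, error) {
	future, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
		key, err := tuple.Tuple{tuple.IncompleteVersionstamp(0)}.PackWithVersionstamp(prefix)
		if err != nil {
			return nil, err
		}
		tr.SetVersionstampedKey(fdb.Key(key), payload)

		// This future only becomes ready after the transaction commits.
		return tr.GetVersionstamp(), nil
	})
	if err != nil {
		return nil, err
	}
	stamp, err := future.(fdb.FutureKey).Get()
	return []byte(stamp), err
}
```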

There is, kind of, a way you could make this work by combining a few of these methods. For example, with each operation you could write a value (using a versionstamped key) that essentially pushes an item onto a queue; this produces a universally agreed-upon ordering. Then you read the value of some global counter key and assign a counter value to each transaction based on its order in that queue. However, this is probably more work than it's worth, and it leads to a fair amount of overhead maintaining the queue.

_ /\ _ Thank you for the quick response. You guys are the best. I cannot reveal the entire logic, but below is the gist of what I want to achieve.


// idempotent
// user total will be zero if this function has never run
// this function must only run once for every user
userTotal := util.BytesToInt64(tr.Get(userTotalKey).MustGet())
if userTotal > 0 {
  return
}

// read
globalTotal := util.BytesToInt64(tr.Get(globalTotalKey).MustGet())

// set the new value
for i := int64(1); i <= 100; i++ { // int64 loop variable so it can be added to globalTotal
  tr.Set(userSS.Pack(tuple.Tuple{userID, globalTotal + i}), []byte(""))
}

// atomically increment read value
plusHundred := util.ToBytes(int64(100))
tr.Add(globalTotalKey, plusHundred)

// atomically increment idempotent value
tr.Add(userTotalKey, plusHundred)