Maintaining Data Integrity

The record I am storing looks like this: [CATEGORY, TYPE, VALUE, ID]. The primary key is on the first three fields [CATEGORY, TYPE, VALUE], and I also have an index on the ID field. I have a business use case where an ID can have only one VALUE for some TYPEs. For example, let's say an ID can have only one VALUE for type T1; then [C1,T1,V1,ID1], [C1,T1,V2,ID1] is invalid, since ID1 would have two values [V1,V2] for type T1.

How can I ensure that when concurrent writes happen, only one of them gets stored and the other is rejected? What I am doing today is: after inserting the row as part of transaction TX1, I do a read in a new transaction TX2 (with the causal-read-risky option) to see if something else got inserted. If so, I roll back TX1; otherwise I commit it. Will this work? If two instances are performing this concurrently, is it possible for each transaction not to see the other's commit, so that both records end up in the DB?
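To make the race concrete, here is a toy Python simulation (not real FDB client code; the in-memory store and the interleaving are invented for illustration) of two writers each running the insert-then-check algorithm. Because neither check runs after the other writer has committed, both checks pass and both rows are stored:

```python
# Toy simulation of the insert-in-TX1, check-in-TX2 algorithm.
# Two writers race; each check sees only *committed* rows, and
# neither writer has committed when the checks run.

db = set()  # committed rows: (category, type, value, id)

class Writer:
    def __init__(self, row):
        self.row = row  # pending insert (TX1, not yet committed)

    def check(self):
        # TX2: is there a committed row with the same (category,
        # type, id) but a different value?
        c, t, v, i = self.row
        return not any(rc == c and rt == t and ri == i and rv != v
                       for (rc, rt, rv, ri) in db)

    def commit(self):
        db.add(self.row)

w1 = Writer(("C1", "T1", "V1", "ID1"))
w2 = Writer(("C1", "T1", "V2", "ID1"))

ok1 = w1.check()   # passes: w2 has not committed yet
ok2 = w2.check()   # passes: w1 has not committed yet
if ok1: w1.commit()
if ok2: w2.commit()

print(len(db))  # 2 -- both rows stored, constraint violated
```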

Hm, I'm a little unclear on what the exact desired constraint is, but it seems like the right unique index might work here. The semantics of a unique index are that, once it is defined, any update that would result in multiple entries with the same indexed values being saved is rejected. (You can also specify whether multiple entries with a "null" value for a field should be allowed via the "null unique" option.)

Would a unique index on (type, id) be sufficient? That would reject any update that would result in two records having the same id and type. (It may need to be (category, type, id) instead of (type, id) if a type is really identified by (category, type).)
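As a sketch of what that index would enforce (a toy in-memory model, not the actual index implementation), the index maps (type, id) to a value, and an insert whose (type, id) already maps to a different value is rejected:

```python
# Toy model of unique-index enforcement on (type, id).

class UniqueIndexStore:
    def __init__(self):
        self.rows = set()
        self.index = {}  # (type, id) -> value

    def insert(self, category, typ, value, rec_id):
        key = (typ, rec_id)
        existing = self.index.get(key)
        if existing is not None and existing != value:
            # a different VALUE already exists for this (type, id)
            raise ValueError(f"unique index violation on {key}")
        self.index[key] = value
        self.rows.add((category, typ, value, rec_id))

store = UniqueIndexStore()
store.insert("C1", "T1", "V1", "ID1")       # accepted
try:
    store.insert("C1", "T1", "V2", "ID1")   # rejected: second VALUE
except ValueError as e:
    print(e)
```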

In general, though, I would not suggest trying to maintain constraints across multiple transactions, because then you have to handle cases like the second transaction failing (during which your constraint may be temporarily violated). The standard way to maintain this kind of invariant is to check the constraint during the first transaction, and let the transaction processor reject the commit (with a conflict) if anything read during that transaction has since been modified. So, in this case, the idea would be to check within TX1 whether there are any other records with a different value for that id and type. If there aren't, issue the commit, and let the FDB transaction resolver fail the update if a concurrent write comes in that violates the constraint.
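A minimal sketch of why the single-transaction check is safe, using a toy optimistic-concurrency model (the resolver logic here is a simplification of what FDB does; the classes and versioning scheme are invented for illustration). Each transaction records what it read and at what version; at commit, the resolver rejects the transaction if any key in its read set was written after its read version:

```python
# Toy optimistic-concurrency model: read-set conflict detection.

class Store:
    def __init__(self):
        self.data = {}
        self.last_write = {}  # key -> version of last committed write
        self.version = 0

class Txn:
    def __init__(self, store):
        self.store = store
        self.read_version = store.version
        self.reads = set()
        self.writes = {}

    def get(self, key):
        self.reads.add(key)
        return self.store.data.get(key)

    def set(self, key, value):
        self.writes[key] = value

    def commit(self):
        s = self.store
        # resolver: fail if any key we read was written after our
        # read version (i.e. by a concurrently committed transaction)
        for key in self.reads:
            if s.last_write.get(key, -1) > self.read_version:
                raise RuntimeError(f"conflict on {key}")
        s.version += 1
        for k, v in self.writes.items():
            s.data[k] = v
            s.last_write[k] = s.version

store = Store()
t1 = Txn(store)
t2 = Txn(store)

key = ("T1", "ID1")           # the (type, id) the constraint covers
assert t1.get(key) is None    # constraint check *inside* each txn
assert t2.get(key) is None
t1.set(key, "V1")
t2.set(key, "V2")

t1.commit()                   # first commit wins
try:
    t2.commit()               # rejected: its read of `key` is stale
except RuntimeError as e:
    print("second transaction rejected:", e)
```

The key difference from the two-transaction approach is that the constraint check and the insert share one read/write set, so the resolver is guaranteed to see the overlap between the two concurrent transactions.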

It’s possible I’m misunderstanding your proposed algorithm, though.