Sharing the metadataVersionKey for multiple tenants

Suppose we have n tenants and one FoundationDB database. Each tenant has a set of infrequently updated keys that need to be read in every transaction. To avoid read hot spots on these keys, clients use the value of the metadataVersionKey to determine whether their local cache is still valid, and writers promise to update the metadataVersionKey any time they change one of these keys.
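
For concreteness, here is a minimal sketch of that protocol using the Python bindings. The `metadata/` key range, the cache handling, and the function names are made up for illustration; only the `\xff/metadataVersion` key and the versionstamped-value mutation come from FDB itself.

```python
import fdb

fdb.api_version(710)
db = fdb.open()

METADATA_VERSION_KEY = b'\xff/metadataVersion'

@fdb.transactional
def load_metadata(tr, cached_version, cached_metadata):
    # The metadataVersion value is shipped to the client along with the read
    # version, so reading it is cheap and does not create a hot shard.
    version = tr[METADATA_VERSION_KEY].wait()
    if cached_version is not None and version == cached_version:
        return cached_version, cached_metadata   # local cache still valid
    # Someone bumped the key: re-read the infrequently updated metadata keys.
    metadata = {k: v for k, v in tr.get_range(b'metadata/', b'metadata0')}
    return version, metadata

@fdb.transactional
def write_metadata(tr, key, value):
    tr[b'metadata/' + key] = value
    # The writer's side of the contract: bump the metadataVersion key with a
    # versionstamped value (10-byte placeholder plus a 4-byte offset of 0).
    tr.set_versionstamped_value(METADATA_VERSION_KEY, b'\x00' * 14)

# Callers keep (cached_version, cached_metadata) between transactions:
# cached_version, cached_metadata = load_metadata(db, cached_version, cached_metadata)
```

Note that the plain read of the metadataVersionKey above registers a read conflict on it, which is exactly what leads to the problem described next.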

The problem with this is that writing to one tenant’s metadata invalidates every tenant’s cache and causes every in-flight transaction to abort.

Some ideas for dealing with this:

  1. Verify the value of the metadataVersionKey, and then manually remove the read conflict on the metadataVersionKey (e.g. by reading it at snapshot isolation so that no conflict is registered in the first place). Add a read conflict range on just the keys you care about. This way at least you don’t abort your transaction if another tenant’s metadata changes. A sketch of this, combined with (2), follows the list.
  2. Have one key per tenant that you can use to check the validity of that tenant’s cache. If another tenant’s metadata changes, you only need to re-read one key.
  3. Batch writes that require changes to the metadataVersionKey, so that you can change the key less frequently.
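
A rough sketch of (1) combined with (2), in Python. The `tenant/<id>/metadataVersion` key layout and the function names are hypothetical.

```python
import fdb

fdb.api_version(710)

GLOBAL_VERSION_KEY = b'\xff/metadataVersion'

def tenant_version_key(tenant_id):
    # Hypothetical per-tenant cache-invalidation key (idea 2).
    return b'tenant/' + tenant_id + b'/metadataVersion'

@fdb.transactional
def tenant_cache_is_valid(tr, tenant_id, cached_global, cached_tenant):
    # Idea 1: read the global key at snapshot isolation so no read conflict is
    # registered on it; another tenant bumping it cannot abort this transaction.
    global_version = tr.snapshot[GLOBAL_VERSION_KEY].wait()
    if global_version == cached_global:
        # Nothing changed cluster-wide. Conflict only on this tenant's own key,
        # so the transaction still fails if *our* metadata changes concurrently.
        tr.add_read_conflict_key(tenant_version_key(tenant_id))
        return True
    # The global key moved, but possibly only for some other tenant.
    # Idea 2: re-read just this tenant's key (a normal, conflict-tracked read).
    tenant_version = tr[tenant_version_key(tenant_id)].wait()
    return tenant_version == cached_tenant
```

If this returns False, the client re-reads that tenant’s metadata and refreshes its local cache.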

Do these make sense? Are there other techniques here?

FWIW, the way the Record Layer handles this is that we always read the meta-data version stamp key at SNAPSHOT isolation level:

We use this key to cache some configuration information that each client needs to load when it opens a record store and that should be maintained transactionally with the record store’s data. If the meta-data version has changed, then we need to go read the (no longer cached) key. If the value hasn’t changed, then we add a read conflict range only on the cached keys, so if the cached values change, the transaction fails, but changes to the meta-data version stamp key itself don’t result in transaction failures.
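
In rough Python-binding terms, that check looks something like the sketch below; the `config/` range is just a stand-in for the cached configuration keys, not the Record Layer’s actual keyspace.

```python
import fdb

fdb.api_version(710)

METADATA_VERSION_KEY = b'\xff/metadataVersion'
CONFIG_RANGE = (b'config/', b'config0')   # stand-in for the cached keys

@fdb.transactional
def load_config(tr, cached):
    # Snapshot read: no read conflict on the version stamp key, so bumps of it
    # by other record stores never fail this transaction.
    version = tr.snapshot[METADATA_VERSION_KEY].wait()
    if cached is not None and version == cached['version']:
        # Cache still valid; conflict only on the cached keys themselves, so a
        # concurrent change to the cached values still fails the transaction.
        tr.add_read_conflict_range(*CONFIG_RANGE)
        return cached
    # The version changed: go read the (no longer cached) configuration keys.
    config = {k: v for k, v in tr.get_range(*CONFIG_RANGE)}
    return {'version': version, 'config': config}
```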

Because we’re only using this to cache one key per record store (which I believe should count as a “tenant” according to the post) with a relatively small value, I don’t think we’d gain too much from a separate “per-tenant cache key”, but I could see other use cases that have larger cached values wanting to have a separate “cache invalidation key” from their “data key(s)”.

I think a user could also implement (3) with our system, though as this value changes each time a record store is upgraded, the procedure would be to upgrade multiple record stores at once. That doesn’t scale particularly well to many record stores, but if there are many record stores, then the need to cache (assuming we’re concerned about hot shards rather than request latency) is relatively low.

Good topic. We’re not yet live with this, but we are heading towards Option 1 + Option 2 in our deployment. Every time some tenant-specific metadata is updated, we will bump both the metadataVersion and a tenant-specific key, which keeps the invalidation cost low for all the other tenants and adds no overhead in the happy path.
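
For what it’s worth, a sketch of that write path, with hypothetical `tenant/...` key names:

```python
import fdb

fdb.api_version(710)

METADATA_VERSION_KEY = b'\xff/metadataVersion'

@fdb.transactional
def update_tenant_metadata(tr, tenant_id, key, value):
    tr[b'tenant/' + tenant_id + b'/metadata/' + key] = value
    # Tenant-specific invalidation key: only this tenant's readers must re-read.
    tr.set_versionstamped_value(b'tenant/' + tenant_id + b'/metadataVersion',
                                b'\x00' * 14)
    # Global metadataVersion: cheap for every client to check on each
    # transaction, so caches notice the change without polling the tenant key.
    tr.set_versionstamped_value(METADATA_VERSION_KEY, b'\x00' * 14)
```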

Does FDB have tenant-specific metadata now?