The shortest example I can think of is if you have a document store that can add and drop indexes at runtime.
You need to maintain a list of which indexes are active. One way is to store that list under a single key and read that key during every transaction, so that any time an index is added or removed you see it right away.
If every transaction has to read a single key, you will eventually overload the storage servers that hold that key. It also taxes every transaction with a small amount of latency (although you could probably hide it by optimistically assuming the schema is valid and checking it later in the transaction). Note that you still need to manage the backfilling and deleting of indexes in the background; you'll just observe those state transitions on all clients immediately.
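A minimal sketch of that approach with the Go bindings; the `schema` key, the JSON encoding, and the index-entry layout are illustrative assumptions, not anything FDB prescribes:

```go
package main

import (
	"encoding/json"
	"log"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
	"github.com/apple/foundationdb/bindings/go/src/fdb/tuple"
)

// Hypothetical key holding the JSON-encoded list of active indexes.
var schemaKey = fdb.Key("schema")

func insertDocument(db fdb.Database, docID string, doc map[string]string) error {
	_, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
		// Every transaction reads the schema key, so index changes are seen
		// immediately -- but this single key becomes a hot spot under load.
		raw, err := tr.Get(schemaKey).Get()
		if err != nil {
			return nil, err
		}
		var indexes []string
		if raw != nil {
			if err := json.Unmarshal(raw, &indexes); err != nil {
				return nil, err
			}
		}

		body, err := json.Marshal(doc)
		if err != nil {
			return nil, err
		}
		tr.Set(fdb.Key(tuple.Tuple{"docs", docID}.Pack()), body)

		// Maintain an entry in every currently active index.
		for _, field := range indexes {
			if value, ok := doc[field]; ok {
				tr.Set(fdb.Key(tuple.Tuple{"index", field, value, docID}.Pack()), []byte{})
			}
		}
		return nil, nil
	})
	return err
}

func main() {
	fdb.MustAPIVersion(630)
	db := fdb.MustOpenDefault()
	if err := insertDocument(db, "doc1", map[string]string{"author": "alice"}); err != nil {
		log.Fatal(err)
	}
}
```

The later sketches assume the same setup (`fdb.MustAPIVersion` plus an opened database) and just take the `fdb.Database` or transaction as a parameter.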
The alternative is to somehow cache the schema. The goal is to operate the cache such that any client only ever sees either the current version or the previous version of the schema. If you can maintain that, you can use the online schema change protocol from F1. This is possible in FDB, but it takes a fair amount of code that every layer wanting online schema changes has to write, and a bug in that code is almost guaranteed to create inconsistencies in the data, such as index entries that point to nothing.
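One way to maintain that two-version invariant is the lease scheme F1 describes: clients refuse to use a cached schema older than some lease, and whoever runs the schema change waits longer than that lease between intermediate versions. A rough sketch of the client side, with hypothetical key names and durations:

```go
package schemacache

import (
	"encoding/json"
	"sync"
	"time"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

// Hypothetical key holding the JSON-encoded schema.
var schemaKey = fdb.Key("schema")

// schemaLease is how long a cached schema may be used. The process running a
// schema change must wait longer than this between intermediate versions, so
// every live client is on either the current or the previous version.
const schemaLease = 60 * time.Second

type schemaCache struct {
	mu        sync.Mutex
	schema    []string
	fetchedAt time.Time
}

func (c *schemaCache) get(db fdb.Database) ([]string, error) {
	c.mu.Lock()
	defer c.mu.Unlock()
	if c.schema != nil && time.Since(c.fetchedAt) < schemaLease {
		return c.schema, nil
	}
	// Lease expired (or nothing cached yet): re-read the schema key.
	raw, err := db.ReadTransact(func(tr fdb.ReadTransaction) (interface{}, error) {
		return tr.Get(schemaKey).Get()
	})
	if err != nil {
		return nil, err
	}
	var schema []string
	if b, ok := raw.([]byte); ok && b != nil {
		if err := json.Unmarshal(b, &schema); err != nil {
			return nil, err
		}
	}
	c.schema, c.fetchedAt = schema, time.Now()
	return c.schema, nil
}
```

The awkward part is everything this leaves out: the schema changer's pacing between versions, handling clients whose clocks drift, and writing each intermediate state (delete-only, write-only, backfilled) correctly, which is where the inconsistency bugs creep in.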
This change sends a third kind of version (alongside the read and commit versions), the "metadata version" of a transaction, back to the client when it begins a transaction. This lets clients cache the schema and invalidate the cache whenever the metadata version differs from the one they have cached.
When a client detects a change, it can read the actual schema key and continue serving requests. This means only the transactions running right after a schema change need to read the actual schema key.
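Roughly what that might look like with the Go bindings. The `\xff/metadataVersion` key and the versionstamped-value write are the real mechanism; the schema key, the cache shape, and whether your FDB version needs extra transaction options for the write are assumptions to check against the docs for your version:

```go
package schemaversion

import (
	"bytes"
	"encoding/json"
	"sync"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

var (
	metadataVersionKey = fdb.Key("\xff/metadataVersion")
	schemaKey          = fdb.Key("schema") // hypothetical key holding the index list
)

type cache struct {
	mu      sync.Mutex
	version []byte // metadata version the cached schema was read at
	schema  []string
}

// Schema returns the cached schema, re-reading the real schema key only when
// the metadata version observed by this transaction has changed.
func (c *cache) Schema(tr fdb.Transaction) ([]string, error) {
	// This read is answered from the value the client already received along
	// with its read version, so it does not touch a storage server.
	mv, err := tr.Get(metadataVersionKey).Get()
	if err != nil {
		return nil, err
	}

	c.mu.Lock()
	defer c.mu.Unlock()
	if c.schema != nil && bytes.Equal(mv, c.version) {
		return c.schema, nil
	}

	// Metadata version changed (or nothing cached yet): fall back to reading
	// the actual, potentially hot, schema key.
	raw, err := tr.Get(schemaKey).Get()
	if err != nil {
		return nil, err
	}
	var schema []string
	if raw != nil {
		if err := json.Unmarshal(raw, &schema); err != nil {
			return nil, err
		}
	}
	c.schema, c.version = schema, mv
	return c.schema, nil
}

// ChangeSchema writes the new index list and bumps the metadata version so
// every client invalidates its cache on its next transaction.
func ChangeSchema(db fdb.Database, indexes []string) error {
	_, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
		body, err := json.Marshal(indexes)
		if err != nil {
			return nil, err
		}
		tr.Set(schemaKey, body)
		// The atomic op fills in the 10-byte versionstamp; the trailing four
		// zero bytes are the little-endian offset expected at recent API
		// versions. Depending on your FDB version you may also need
		// tr.Options().SetAccessSystemKeys() for this write.
		tr.SetVersionstampedValue(metadataVersionKey, make([]byte, 14))
		return nil, nil
	})
	return err
}
```

A layer would call `Schema(tr)` at the start of each transaction: when the metadata version is unchanged, nothing beyond the values already delivered with the read version is touched, which is exactly the property the hot-key approach lacks.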