Unfortunately, this is one of the more painful areas in using the Record Layer on a live service with a mutating schema. My main recommendation (if it’s at all possible) would be to upgrade all of the services accessing the same store so that they’re using the same version of the meta-data, as most other configurations can result in data corruption.
What’s going on here is that the store writes the “version” number of the meta-data into the database, and any modification to the meta-data is supposed to bump that version. We do this because accessing a store with a stale version of the meta-data can lead to data corruption. Take adding an index: if service A doesn’t know about an index that service B added, then when service A inserts or deletes a record, it won’t know to update the new index. That can result in new records being missing from the index (and therefore not showing up in queries) or deleted records leaving entries behind in the index (leading to “missing record” errors if the index is used during a query).
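To make that concrete, here’s a minimal sketch of how evolving the meta-data bumps its version. The record type, field, and MyRecordsProto class are made-up placeholders, and my understanding is that the builder bumps its version when you add an index, but double-check against the version you’re running:

```java
import com.apple.foundationdb.record.RecordMetaData;
import com.apple.foundationdb.record.RecordMetaDataBuilder;
import com.apple.foundationdb.record.metadata.Index;
import com.apple.foundationdb.record.metadata.Key;

public class MetaDataEvolutionExample {
    // MyRecordsProto is a placeholder for your generated protobuf class.
    static RecordMetaData evolve() {
        RecordMetaDataBuilder builder = RecordMetaData.newBuilder()
                .setRecords(MyRecordsProto.getDescriptor());
        int oldVersion = builder.getVersion();

        // Adding an index mutates the meta-data; the builder should bump the
        // version so that stores opened with stale meta-data get rejected.
        builder.addIndex("MyRecord", new Index("MyRecord$new_field",
                Key.Expressions.field("new_field")));
        assert builder.getVersion() > oldVersion;

        return builder.getRecordMetaData();
    }
}
```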
All of these data corruption events should be isolated to read-write operations. I think it’s relatively safe to do a read-only operation, though you can’t really rely on indexes in that case. If you really want to ignore the indexes, you could theoretically use an FDBRecordStore that you create by calling .build() instead of .createOrOpen(). This skips the check that you’re hitting, though I do want to reiterate that skipping this check can lead to data corruption, so it should only be done if you’re really sure you’re not going to modify the database. I’d also argue against setting the meta-data version to Integer.MAX_VALUE as written, as I believe it will write that meta-data version into the database (if the transaction associated with the store is committed), which means you’ll never be able to safely update your meta-data ever again.
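To make the distinction concrete, here’s a rough sketch; the path and metaData variables are placeholders, and I’d verify the builder methods against the Record Layer version you’re on:

```java
import com.apple.foundationdb.record.provider.foundationdb.FDBDatabase;
import com.apple.foundationdb.record.provider.foundationdb.FDBDatabaseFactory;
import com.apple.foundationdb.record.provider.foundationdb.FDBRecordStore;

FDBDatabase db = FDBDatabaseFactory.instance().getDatabase();
db.run(context -> {
    // Normal path: loads the store header and validates it, including
    // checking the stored meta-data version against the provided meta-data.
    FDBRecordStore checkedStore = FDBRecordStore.newBuilder()
            .setContext(context)
            .setKeySpacePath(path)         // placeholder KeySpacePath
            .setMetaDataProvider(metaData) // your (possibly stale) meta-data
            .createOrOpen();

    // Escape hatch: builds the store object without performing the header
    // check at all. Only even arguably safe for reads, and queries may still
    // rely on indexes this meta-data doesn't know about.
    FDBRecordStore uncheckedStore = FDBRecordStore.newBuilder()
            .setContext(context)
            .setKeySpacePath(path)
            .setMetaDataProvider(metaData)
            .build();
    return null;
});
```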
So, the “right” way to do this is, unfortunately, a little non-obvious. There are a couple of different ways, but none of them are super off-the-shelf. If you have the meta-data in code, one technique is to create a separate “evolved” meta-data object and push out a version of the code containing both meta-data versions. You initialize both versions in a “meta-data manager” object, and then set that object up to select the right version.

One way we sometimes do this is by making the “meta-data manager” object also serve as the “user version checker”, which checks the store’s “user” version (a version the Record Layer assigns no semantic meaning to but exposes to the user in case they want to guard some of their own features on it). Because the user version checker always runs before the meta-data version check during store opening, it can determine whether the store has been upgraded to the new meta-data yet or whether it should continue to use the old one. Once you want to upgrade the meta-data, you push a version with just the new meta-data; it upgrades individual stores, and older instances then start using the newer meta-data as well.
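As a rough sketch of that dance (the exact UserVersionChecker method signature has changed over time, so treat the shape here as an assumption to verify against your Record Layer version):

```java
import java.util.concurrent.CompletableFuture;
import com.apple.foundationdb.record.RecordMetaData;
import com.apple.foundationdb.record.RecordMetaDataProvider;
import com.apple.foundationdb.record.provider.foundationdb.FDBRecordStoreBase;

// Hypothetical "meta-data manager": it serves as both the meta-data provider
// and the user version checker for a single store opening. (Sharing one
// instance across concurrent openings would need more care than shown here.)
class MetaDataManager implements RecordMetaDataProvider, FDBRecordStoreBase.UserVersionChecker {
    private final RecordMetaData oldMetaData;
    private final RecordMetaData newMetaData;
    private volatile RecordMetaData chosen;

    MetaDataManager(RecordMetaData oldMetaData, RecordMetaData newMetaData) {
        this.oldMetaData = oldMetaData;
        this.newMetaData = newMetaData;
        this.chosen = oldMetaData;
    }

    @Override
    public CompletableFuture<Integer> checkUserVersion(int oldUserVersion, int oldMetaDataVersion,
                                                       RecordMetaDataProvider metaData) {
        // The user version checker runs before the meta-data version check, so
        // we can look at the version already written to the store and pick the
        // meta-data that matches it.
        chosen = oldMetaDataVersion >= newMetaData.getVersion() ? newMetaData : oldMetaData;
        return CompletableFuture.completedFuture(oldUserVersion);
    }

    @Override
    public RecordMetaData getRecordMetaData() {
        return chosen;
    }
}
```

You’d then hand the same object to the store builder twice, as both setMetaDataProvider(manager) and setUserVersionChecker(manager), so that by the time the meta-data check runs, the manager has already picked a side.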
However, that dance is pretty complicated, and our longer-term goal has been to make it so that users don’t have to do it. The place we want to get to is one where users can store the meta-data in the FDB cluster itself and load it from the database. While you can do that today with the FDBMetaDataStore, it has the disadvantage that you need to reload the meta-data from the meta-data store every time you open the record store. We have some plans for introducing a caching layer (with the proper invalidation mechanisms, so that when a store is upgraded, the meta-data is automatically reloaded), but we’ve been busy with other projects and that work hasn’t been completed. See: Improve meta-data evolution and management · Issue #283 · FoundationDB/fdb-record-layer on GitHub.
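For reference, a sketch of what using the FDBMetaDataStore looks like today; the metaDataPath and recordsPath key space paths and the evolvedMetaData variable are placeholders:

```java
import com.apple.foundationdb.record.provider.foundationdb.FDBDatabase;
import com.apple.foundationdb.record.provider.foundationdb.FDBDatabaseFactory;
import com.apple.foundationdb.record.provider.foundationdb.FDBMetaDataStore;
import com.apple.foundationdb.record.provider.foundationdb.FDBRecordStore;

FDBDatabase db = FDBDatabaseFactory.instance().getDatabase();

// When the schema changes: persist the evolved meta-data in the cluster.
db.run(context -> {
    FDBMetaDataStore metaDataStore = new FDBMetaDataStore(context, metaDataPath);
    metaDataStore.saveRecordMetaData(evolvedMetaData);
    return null;
});

// On every store opening: the meta-data is reloaded from the database, which
// is exactly the per-open cost that the planned caching layer would eliminate.
db.run(context -> {
    FDBMetaDataStore metaDataStore = new FDBMetaDataStore(context, metaDataPath);
    FDBRecordStore store = FDBRecordStore.newBuilder()
            .setContext(context)
            .setKeySpacePath(recordsPath)
            .setMetaDataStore(metaDataStore)
            .createOrOpen();
    // ... read and write records within this transaction ...
    return null;
});
```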