We’re seeing intermittent exceptions when using the Record Layer to persist records via saveRecord/saveRecordAsync since upgrading to 2.8.91.
Debugging the issue, it appears that the unpackKey call during FDBRecordStore#loadExistingRecord (invoked from saveRecord) returns a tuple whose first element is null.
Our setup has setSplitLongRecords set to true, but I have verified that none of the records are actually split, since we never exceed the 100 KB size threshold at which splitting happens.
Any ideas what the issue might be?
We initially suspected that rewriting a record was triggering this, but we have seen it for new record inserts as well.
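For reference, our save path looks roughly like this (a simplified sketch; RecordsProto, recordsPath, and the method name are placeholders for our actual generated proto classes and key space):

```java
import com.apple.foundationdb.record.RecordMetaData;
import com.apple.foundationdb.record.RecordMetaDataBuilder;
import com.apple.foundationdb.record.provider.foundationdb.FDBDatabase;
import com.apple.foundationdb.record.provider.foundationdb.FDBDatabaseFactory;
import com.apple.foundationdb.record.provider.foundationdb.FDBRecordStore;
import com.apple.foundationdb.record.provider.foundationdb.keyspace.KeySpacePath;
import com.google.protobuf.Message;

public class SaveExample {
    // recordsPath and RecordsProto stand in for our real key space path and proto file.
    static void save(KeySpacePath recordsPath, Message record) {
        RecordMetaDataBuilder metaData = RecordMetaData.newBuilder()
                .setRecords(RecordsProto.getDescriptor());
        // Splitting is enabled, but no record comes anywhere near the split threshold.
        metaData.setSplitLongRecords(true);

        FDBDatabase db = FDBDatabaseFactory.instance().getDatabase();
        db.run(context -> {
            FDBRecordStore store = FDBRecordStore.newBuilder()
                    .setMetaDataProvider(metaData)
                    .setContext(context)
                    .setKeySpacePath(recordsPath)
                    .createOrOpen();
            // The failure surfaces inside saveRecord, which calls loadExistingRecord
            // and unpacks the stored key there.
            return store.saveRecord(record);
        });
    }
}
```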
Does your environment report LoggableException.getLogInfo? If so, what do the subspace and key there look like? If not, handling that might help narrow it down.
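Something along these lines would dump it at the point of failure (just a sketch; recordStore and record stand in for whatever you are saving):

```java
import com.apple.foundationdb.record.RecordCoreException;
import com.apple.foundationdb.record.provider.foundationdb.FDBRecordStore;
import com.google.protobuf.Message;

static void saveWithLogInfo(FDBRecordStore recordStore, Message record) {
    try {
        recordStore.saveRecord(record);
    } catch (RecordCoreException e) {
        // getLogInfo (inherited from LoggableException) returns the structured
        // key/value pairs attached to the exception, which should include the
        // offending subspace and key.
        e.getLogInfo().forEach((key, value) -> System.err.println(key + " = " + value));
        throw e;
    }
}
```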
I meant getLogInfo from the RecordCoreException itself. It looks like that method doesn’t combine log info from its causes, so you’re only seeing the info on the exception that was just created.
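Since it doesn’t merge causes, you could walk the cause chain yourself and gather the log info from each LoggableException along the way; a rough sketch:

```java
import com.apple.foundationdb.util.LoggableException;
import java.util.LinkedHashMap;
import java.util.Map;

// Collect log info from the exception and every cause beneath it, because
// getLogInfo only reports the info attached to that one exception.
static Map<String, Object> allLogInfo(Throwable top) {
    Map<String, Object> combined = new LinkedHashMap<>();
    for (Throwable cause = top; cause != null; cause = cause.getCause()) {
        if (cause instanceof LoggableException) {
            combined.putAll(((LoggableException) cause).getLogInfo());
        }
    }
    return combined;
}
```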
Since the problem only occurs when using FDBMetaDataStore, I wonder whether you are storing it in the same (that is, overlapping) keyspace as the records. That has happened here before, so maybe there should be an explicit check for it.
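One quick way to rule that out is to root the meta-data store and the record store in disjoint directories of the same KeySpace, something like the following (directory names are made up, and I’m assuming the FDBMetaDataStore constructor that takes a KeySpacePath):

```java
import com.apple.foundationdb.record.provider.foundationdb.FDBMetaDataStore;
import com.apple.foundationdb.record.provider.foundationdb.FDBRecordContext;
import com.apple.foundationdb.record.provider.foundationdb.FDBRecordStore;
import com.apple.foundationdb.record.provider.foundationdb.keyspace.KeySpace;
import com.apple.foundationdb.record.provider.foundationdb.keyspace.KeySpaceDirectory;
import com.apple.foundationdb.record.provider.foundationdb.keyspace.KeySpacePath;

static FDBRecordStore openWithSeparateMetaData(FDBRecordContext context) {
    // "myApp" and the directory names are placeholders; the point is that the
    // meta-data path and the records path are sibling directories, so their
    // key ranges cannot overlap.
    KeySpace keySpace = new KeySpace(
            new KeySpaceDirectory("app", KeySpaceDirectory.KeyType.STRING, "myApp")
                    .addSubdirectory(new KeySpaceDirectory("metaData", KeySpaceDirectory.KeyType.STRING, "md"))
                    .addSubdirectory(new KeySpaceDirectory("records", KeySpaceDirectory.KeyType.STRING, "rec")));
    KeySpacePath metaDataPath = keySpace.path("app").add("metaData");
    KeySpacePath recordsPath = keySpace.path("app").add("records");

    // Meta-data store rooted under its own path...
    FDBMetaDataStore metaDataStore = new FDBMetaDataStore(context, metaDataPath);
    // ...and the record store under a separate, non-overlapping one.
    return FDBRecordStore.newBuilder()
            .setContext(context)
            .setMetaDataStore(metaDataStore)
            .setKeySpacePath(recordsPath)
            .createOrOpen();
}
```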