Key-val sizes in the Record Layer

Hm, the feature alluded to in the paper there is just that single records are split across multiple key-value pairs, so a single record is not subject to FoundationDB's 100 kB “max value” size limit.
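
For what it's worth, that splitting behavior is opt-in on the record meta-data. A minimal sketch, if I'm remembering the builder method right, assuming a hypothetical protobuf-generated class called `MyRecordsProto` for your record definitions (that name is mine, not from the docs):

```java
import com.apple.foundationdb.record.RecordMetaData;
import com.apple.foundationdb.record.RecordMetaDataBuilder;

public final class MetaDataExample {
    // MyRecordsProto stands in for your protobuf-generated record definitions.
    static RecordMetaData buildMetaData() {
        RecordMetaDataBuilder builder = RecordMetaData.newBuilder()
                .setRecords(MyRecordsProto.getDescriptor());
        // Allow records bigger than a single FDB value; the Record Layer then
        // transparently splits each saved record across multiple key-value pairs.
        builder.setSplitLongRecords(true);
        return builder.build();
    }
}
```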

The “catch” is:

  1. The Record Layer will choose to split your record so that each value (except the last) is 100 kB, right up to the limit. It’s possible that the key-value store would do better if the data were split across more, smaller values; it’s also possible that 100 kB is optimal, but we haven’t actually done the work to test that.
  2. A record must be written or updated within a single transaction, and the bytes written count against FoundationDB’s transaction size limit. As a result, you can’t in practice write a record larger than about 10 MB. The Record Layer doesn’t enforce that explicitly, so you won’t get a “record too big” error; you’ll get a “transaction too large” error instead (see the sketch at the end of this note).
  3. On more of a data-modeling level, Record Layer reads and writes are done at the record level (with the exception of covering indexes, where a query’s required fields can be satisfied by looking only at index data). So if you write large records, any time you need to read one the Record Layer will load the full record from the cluster, and any time you need to update it the Record Layer will save the full record again. If your data are relatively cold, that may be fine, but if parts of a record are updated or accessed more frequently than others, it may make sense to split those parts out into their own record (see the sketch right after this list).
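
To make that last point concrete, here’s a rough sketch of what I mean (the `Document`/`DocumentStats` record types and the `MyRecordsProto` class are made up for illustration): a large, mostly-cold document record, with a frequently-bumped view counter split out into its own tiny record. Re-saving the stats record rewrites a handful of bytes; re-saving the document rewrites every one of its split key-value pairs.

```java
import com.apple.foundationdb.record.provider.foundationdb.FDBRecordStore;

public final class HotFieldSplitExample {
    // If the counter lives inside the big Document record, bumping it means
    // re-serializing and re-writing the entire (possibly multi-megabyte) record.
    static void incrementViewsInline(FDBRecordStore store, MyRecordsProto.Document doc) {
        store.saveRecord(doc.toBuilder().setViewCount(doc.getViewCount() + 1).build());
    }

    // If the counter is split into its own small DocumentStats record, only a
    // few key-value pairs are written and the large Document is left untouched.
    static void incrementViewsSplit(FDBRecordStore store, MyRecordsProto.DocumentStats stats) {
        store.saveRecord(stats.toBuilder().setViewCount(stats.getViewCount() + 1).build());
    }
}
```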

I alluded to this above, but just to be explicit: transactions in the Record Layer are still subject to FoundationDB’s 10 MB limit (and should probably be kept under 1 MB).
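
To be concrete about that failure mode: the error comes from the FoundationDB client (error code 2101, `transaction_too_large`), wrapped by the Record Layer, rather than anything record-specific. A rough sketch of detecting it, assuming you already have some way of opening your record store inside a transaction (the `openStore` parameter here is a stand-in for that, not a real API):

```java
import com.apple.foundationdb.FDBException;
import com.apple.foundationdb.record.provider.foundationdb.FDBDatabase;
import com.apple.foundationdb.record.provider.foundationdb.FDBRecordContext;
import com.apple.foundationdb.record.provider.foundationdb.FDBRecordStore;
import com.google.protobuf.Message;
import java.util.function.Function;

public final class TransactionSizeExample {
    // FoundationDB's transaction_too_large error code.
    private static final int TRANSACTION_TOO_LARGE = 2101;

    // Try to save a (possibly huge) record; return false if the commit failed
    // because the transaction exceeded FDB's size limit.
    static boolean trySave(FDBDatabase database, Message record,
                           Function<FDBRecordContext, FDBRecordStore> openStore) {
        try {
            database.run(context -> openStore.apply(context).saveRecord(record));
            return true;
        } catch (RuntimeException e) {
            // The Record Layer wraps FDB errors, so walk the cause chain for the
            // underlying FDBException and check its code.
            for (Throwable t = e; t != null; t = t.getCause()) {
                if (t instanceof FDBException && ((FDBException) t).getCode() == TRANSACTION_TOO_LARGE) {
                    return false;  // "transaction too large", not "record too big"
                }
            }
            throw e;
        }
    }
}
```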