The design constraints are:
- There should be a simple, documented limit rather than a slow descent into pain as values get larger
- The limit should be small enough that reading a single KV doesn’t represent a noticeable latency blip for the entire server
- The limit should be high enough to not be annoying to developers
- The limit should be high enough that the cost of reading the bytes associated with the value is significantly more expensive than the per-request overhead (this is to enable a low abstraction penalty for applications storing large data across many key-value pairs)
Given FoundationDB’s use of SSDs, 100K was chosen years ago as fitting those constraints.
A layer can easily build a “large-value” abstraction that supports seeking and streaming of large values by storing, say, 64K at a time in multiple keys.
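A minimal sketch of such a chunking scheme (the helper names and the 8-byte offset encoding in the key are illustrative assumptions, not part of any FoundationDB API; a real layer would write each chunk as a separate key-value pair within one transaction):

```python
CHUNK_SIZE = 64 * 1024  # 64K per chunk, well under the 100K value limit

def chunk_keys_and_values(prefix: bytes, value: bytes):
    """Split `value` into chunks, yielding (key, chunk) pairs.

    Each key encodes the chunk's byte offset in big-endian form, so
    lexicographic key order matches byte order: a range read returns
    chunks in sequence, and seeking to an offset is a single-key lookup.
    """
    for offset in range(0, len(value), CHUNK_SIZE):
        key = prefix + offset.to_bytes(8, "big")
        yield key, value[offset:offset + CHUNK_SIZE]

def reassemble(chunks):
    """Concatenate chunks read back in key order."""
    return b"".join(chunk for _key, chunk in chunks)

# Round-trip: a 200K value becomes four pairs (64K + 64K + 64K + 8K).
data = b"x" * (200 * 1024)
pairs = list(chunk_keys_and_values(b"blob/", data))
assert len(pairs) == 4
assert reassemble(pairs) == data
```

Because each stored value stays under the limit, reads and writes of individual chunks keep the predictable latency the constraints above call for, and streaming falls out of an ordered range read over the prefix.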
An interesting improvement to FDB would be to have its native API support streaming large values, but it would probably be a breaking API change.