Value_too_large on reads?

We're seeing this just doing range gets: when we reach rows with large values, we get value_too_large exceptions, basically along the lines of:

Causing: com.foundationdb.FDBException: value_too_large
! at com.foundationdb.FDBException.retargetClone(FDBException.java:41)
! at com.foundationdb.async.SettableFuture.getIfDone(SettableFuture.java:188)
! at com.foundationdb.async.AbstractFuture.get(AbstractFuture.java:28)
! at com.foundationdb.RangeQuery$AsyncRangeIterator.hasNext(RangeQuery.java:262)

Given that this is a read, could something have allowed the write but then failed on the read?

We're also pretty confident that our values are never larger than 100k, so it's odd that this fails on reads.

I wouldn't expect this to be thrown on reads. We only throw value_too_large from four places in our (non-testing) codebase: the atomicOp and set functions in NativeApi.actor.cpp and ReadYourWrites.actor.cpp, neither of which should be used by a range read.
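For context, FoundationDB documents a 100,000-byte limit on value sizes, and value_too_large is the error raised when a write exceeds it. Here is a rough plain-Java sketch of that kind of write-path size check; the names (ValueSizeCheck, checkValueSize, VALUE_SIZE_LIMIT) are illustrative only, since the real check lives in the C++ client code mentioned above, not in the Java bindings:

```java
// Illustrative sketch of a write-path value size check, NOT the actual
// FDB client code. FDB documents a 100,000-byte limit on values and
// rejects larger ones with value_too_large at write time.
public class ValueSizeCheck {
    // Documented FDB value size limit, in bytes.
    static final int VALUE_SIZE_LIMIT = 100_000;

    // Called on the write path (set / atomicOp), never on reads.
    static void checkValueSize(byte[] value) {
        if (value.length > VALUE_SIZE_LIMIT) {
            // The real client raises the value_too_large error here.
            throw new IllegalArgumentException(
                "value_too_large: " + value.length + " bytes");
        }
    }

    public static void main(String[] args) {
        checkValueSize(new byte[VALUE_SIZE_LIMIT]); // exactly at the limit: OK
        try {
            checkValueSize(new byte[VALUE_SIZE_LIMIT + 1]);
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

Since the check only runs on writes, a read should never trip it directly, which is what makes the reported stack trace surprising.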

If my reading of retargetClone() is right, it copies the error from some other FDBException with the stack trace of the current location. So I don’t think that this is hasNext() throwing a value_too_large, I think this is value_too_large being re-thrown at hasNext.
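The re-throw pattern described here can be illustrated with plain java.util.concurrent futures. This is an analogy, not the actual client code: the FDB Java bindings use their own SettableFuture/AbstractFuture classes, but the mechanics are the same — the error is attached to the future wherever the failure actually happened, and only surfaces later, with a fresh stack trace, at the call site that blocks on the result:

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;

public class RetargetedError {
    // Blocks on the future; if it was completed with an error, returns the
    // original error's message. The ExecutionException caught here has a
    // stack trace pointing at this get() call (the re-throw site), while
    // getCause() carries the error from wherever the future actually failed.
    static String messageAtGet(CompletableFuture<?> future)
            throws InterruptedException {
        try {
            future.get();
            return null;
        } catch (ExecutionException e) {
            return e.getCause().getMessage();
        }
    }

    public static void main(String[] args) throws InterruptedException {
        CompletableFuture<byte[]> rangeChunk = new CompletableFuture<>();

        // The failure happens "elsewhere" (e.g. on a network thread)...
        rangeChunk.completeExceptionally(new RuntimeException("value_too_large"));

        // ...but only surfaces where the consumer finally asks for the result,
        // analogous to the exception appearing under hasNext() in the trace.
        System.out.println("re-thrown at get(): " + messageAtGet(rangeChunk));
    }
}
```

So the hasNext() frame in the trace tells you where the error was observed, not where it originated.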

Unfortunately, your files and line numbers don’t match up with master, so it’s difficult for me to work through the call stack that you have. I’d be interested to know what the future is that you’re calling .get() on, because it looks like that future was completed with an error that’s being re-thrown.

Weren't there some semantics changes in 4.x where using a transaction after it's been committed causes an error?

Yeah, we located the issue, and it was a retargeted exception. For a while I thought it must be the client-side read cache that was throwing. Thanks for taking a look.