Request for feedback: tuple encoding bug

We’ve recently discovered a bug in the tuple layer encoding of the integers 2^64-1 and -2^64+1 and are interested in gathering feedback about some of the options available to address it. The bug is limited to the Python bindings, but there’s nothing about this that’s really specific to the Python bindings and we’d encourage anyone with an opinion to chime in.

The technical details of the problem aren’t super relevant to the discussion, but I’ve included them below in case you are interested.

First I’ll provide some quick background about how integers are encoded in the Tuple layer. Each type in the Tuple layer has its own type code(s), and for integers there are 19 of them. There is a set of fixed-length type codes that represent integers of different byte lengths and signs (in order: a negative 8-byte type code, then a negative 7-byte type code, …, then a type code for 0, then a positive 1-byte type code, …, and finally a positive 8-byte type code).

In addition to the fixed-length type codes, there is a type code on either end for positive and negative variable-length integers up to 255 bytes.

The problem is that in Python, the variable-length encoding is used for the problem integers, whereas every other binding uses the 8-byte fixed-length encoding.
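
To make that concrete, here’s a small standalone sketch (no fdb package needed) that builds both encodings of 2^64-1 by hand. It assumes my reading of the tuple spec’s type codes (0x1C for a positive 8-byte fixed-length integer, 0x1D for a positive variable-length integer followed by a length byte); treat the byte values as illustrative rather than authoritative.

# Standalone illustration of the two encodings of 2**64 - 1.
# Assumed type codes (from my reading of the tuple spec):
#   0x1C = positive 8-byte fixed-length integer
#   0x1D = positive variable-length integer, followed by a length byte
N = 2**64 - 1
payload = N.to_bytes(8, "big")                          # b'\xff' * 8
fixed_form = bytes([0x1C]) + payload                    # what most bindings write
variable_form = bytes([0x1D, len(payload)]) + payload   # what the Python bindings write

print(fixed_form.hex())     # 1cffffffffffffffff
print(variable_form.hex())  # 1d08ffffffffffffffff

# Both carry the same integer...
assert int.from_bytes(fixed_form[1:], "big") == N
assert int.from_bytes(variable_form[2:], "big") == N

# ...and both still sort after a smaller fixed-length integer such as 2**64 - 2,
# so ordering is preserved either way.
smaller = bytes([0x1C]) + (N - 1).to_bytes(8, "big")
assert smaller < fixed_form and smaller < variable_form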

The Python binding uses a different encoding for these two integers than the other bindings do, but it’s not completely incompatible. All bindings will be able to decode both encodings of the integers, and both encodings still sort correctly. This means that you’ll be able to unpack any tuple that you read from a binding that uses the alternate encoding. However, if you explicitly encode this integer using the Tuple layer, the result in Python will be different than in other bindings. This means that Python may not be able to interoperate with other bindings in a single database if multiple bindings are used to encode this integer.
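
As a toy illustration of that interoperability hazard, here’s the same pair of hand-built encodings with a plain dict standing in for the database; the Java/Python labels just follow the thread’s description of which binding writes which form.

# Toy model of the cross-binding hazard; a dict stands in for the database.
N = 2**64 - 1
payload = N.to_bytes(8, "big")
key_written_by_java = bytes([0x1C]) + payload              # fixed-length form
key_recomputed_in_python = bytes([0x1D, 0x08]) + payload   # variable-length form

database = {key_written_by_java: b"some value"}

# Unpacking the Java-written key works in any binding, since all bindings
# can decode both forms. But a point lookup using the key as recomputed by
# the Python bindings misses, and the record appears to have gone missing:
print(database.get(key_recomputed_in_python))  # None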

Because of the difficulty of providing a general-purpose migration to correct this problem for the data in an existing database, our thought is that any change would involve allowing the user to choose which encoding behavior they want. Here are some of the ways we could do that:

  1. Add the ability to opt in to the correct behavior in the Python bindings. The benefit of this is that it’s unlikely to cause any problems for someone already running the Python bindings, but the downside is that it requires new users to take a proactive step to get the desired behavior. Some users likely won’t, and they may end up with a database that has a binding interoperability problem they don’t discover until later.

  2. Change the behavior in API version 610 and require users to opt out of the new behavior if they want to maintain the old behavior. This allows us to phase out the incorrect behavior more effectively, but at the risk that an existing user fails to opt out when they needed to. If that happens, keys that they expect to be in the database may appear to have gone missing.

  3. Change the behavior in API version 610 to require users to choose the behavior they want. This approach mostly avoids the problems of the previous two, but the extra step required for all Python Tuple users would definitely be a wart in the API.

It’s also possible we could transition between these states in different releases. For example, we could start with an opt-in and move to an opt-out, or we could require the user to specify for a while and then switch to an opt-out.
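
Purely to make the three shapes above concrete, here is a hypothetical sketch of how the choice could surface. None of these names exist in the real bindings; “fixed” stands for the correct 8-byte encoding and “variable” for the current Python behavior.

# Hypothetical sketch of the three rollout shapes; nothing here is real fdb API.
class TupleConfig:
    def __init__(self, api_version):
        self.api_version = api_version
        self.large_int_encoding = None  # set when the user makes an explicit choice

    def choose(self, mode):
        assert mode in ("fixed", "variable")
        self.large_int_encoding = mode

    def option1_encoding(self):
        # Opt-in: the default stays legacy unless the user asked for the fix.
        return self.large_int_encoding or "variable"

    def option2_encoding(self):
        # Opt-out: API version 610 flips the default; users can still choose "variable".
        default = "fixed" if self.api_version >= 610 else "variable"
        return self.large_int_encoding or default

    def option3_encoding(self):
        # Required choice: at 610+, refuse to pack the problem integers until a choice is made.
        if self.api_version >= 610 and self.large_int_encoding is None:
            raise RuntimeError("choose 'fixed' or 'variable' before packing 2**64 - 1")
        return self.large_int_encoding or "variable"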

How do people feel about this? Would any of these options make you uncomfortable if we implemented them?

I think my vote is for #2 (the new API version defaults to the fix, and a separate opt-out is available). Make sure the documentation is very clear about the change and that the safest upgrade path if you can’t migrate your data is to opt out. Users are really, really, really supposed to read release notes before changing API versions because of exactly this sort of thing, and I think it’s better for reckless upgraders to suffer than for the API to be permanently weird.

Obviously incompatible changes to the tuple layer are best to avoid where possible, but this one seems justified to me.

I’ve been interested in adding a section to the documentation (in the release notes, or at least linked from them) that describes specifically how a new API version will affect existing code and what changes may need to be made when upgrading between API versions. It can sometimes be a bit difficult now to piece that together from the release notes, which cover lots of other aspects of the release. If we go with #2, then perhaps that gives us a strong enough nudge to actually write this section.


It’s not the only “wart”: the value == None behavior, or generators that cannot be used with @transactional, are other difficulties.

I agree.

I’ve had a similar issue with a change in the way some values were encoded. In our case, we were using a “draft” version of the spec for storing UUIDs and the header byte changed between 3.x and 5.x, which is a bit similar to the case above: our 5.x tuple library could read both old and new spec, but would only write the new spec (which broke some layers like indexes, counters, document stores with uuid keys, …).

We “fixed” the issue by exporting everything to a neutral, application-specific format (JSON), re-importing everything after upgrading all the servers to the new version, and re-indexing everything. If the dataset is not very large, this can be done (it took a few hours, but it could have been optimized if this hadn’t been a one-time upgrade). If the data set cannot be converted to/from some neutral format in a reasonable amount of time, this solution may not be applicable.

I also thought about having a tool that would read all keys of the database, looking for old-style keys, and rewriting them one by one. This can work for some layers (especially if this does not break encoding), but this can be problematic where the tuple encoding is also used in values (that are pointers to other keys). This would need to have some sort of knowledge of the schema used by the application, and also the database would need to be relatively quiet during this time… Since we already had a backup system that exports/reimports JSON, this was not required in the end.

But maybe having a simple tool that just scans for bad keys (without mutating anything) could help people determine whether they are affected (i.e., whether at least one key with the old format is present)? If they find that they are not affected, then they don’t really need to do anything. If they are, then they can start looking for a migration strategy.

Also, is it known whether these two integers are really used frequently in keys? Are we talking about a few entries per database, or a significant fraction? Integers in keys are usually sequence numbers (starting from 1) or indexed natural values (quantities, enumerations, …). Naively, I would expect to find these two integers mostly in values?

It’s hard to say anything definitive, of course, but I agree it seems true that many common use cases wouldn’t be affected by this. On the other hand, there are scenarios one could imagine where you might end up at risk. For example, anything that inserts a number based on something external (e.g. input from a user or the client of a layer provider) could be subject to this.

A simple tool that scans through key space looking for keys that are tuples and validating them would probably be easy to write. It may be hard to deal with keys that include but don’t entirely consist of a packed tuple (e.g. if keys live in a non-tuple subspace), but that may not be a problem for many users.

The main concern I have is that it’s tricky to run the check safely without turning off your clients while it’s being run. That’s because you’d likely want to wait to do the switch until after you’ve done the check, and a problem key could be inserted unnoticed any time between when you start the check and when you switch to the new encoding. I guess you could potentially try to first instrument your clients or the bindings to detect when such a key is being written and then carefully switch to the new encoding if neither tool finds anything.
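
For what it’s worth, a minimal sketch of such a scanner might look like the following. It assumes the fdb Python package, naively walks a range in a single transaction (a real tool would have to chunk the scan to stay under the transaction size/time limits), and only catches keys that are entirely packed tuples:

import fdb

fdb.api_version(610)
db = fdb.open()

PROBLEM_INTS = {2**64 - 1, -(2**64 - 1)}

@fdb.transactional
def find_suspect_keys(tr, begin=b'', end=b'\xff'):
    # Report keys that decode as tuples containing one of the problem integers.
    suspects = []
    for kv in tr.get_range(begin, end):
        try:
            elements = fdb.tuple.unpack(kv.key)
        except Exception:
            continue  # not a pure packed tuple; this sketch can't judge it
        if any(isinstance(e, int) and e in PROBLEM_INTS for e in elements):
            suspects.append(kv.key)
    return suspects

for key in find_suspect_keys(db):
    print(key.hex())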

I actually think this problem is somewhat subtler than one might think.

For one, there may be circumstances where one has a tuple-encoded thing as a binary blob within another key. For example, if one had a data model where the keys were encoded as:

(index_key, binary_encoded_primary_key)

Then this tool would need to know which columns could be packed tuples.

Another situation might be places where someone has embedded packed tuples in another serialization format (for example, within JSON). So, for example, one could imagine keeping raw FDB keys (which are tuple packed) as fields inside JSON which is then inserted directly as FDB values.

Hopefully, users know whether they are doing these things or not, so if they have to write the tools themselves, then they can correctly try to unpack the right things, but it probably depends on the data model, etc. So, I think a general purpose tool that doesn’t miss any edge cases might be a little too hard.

I think my concern would be, as these are the largest numbers (in magnitude) expressible with 64 bits, that someone might have tried to use them as “infinity” and “negative infinity” respectively. For example, if someone were indexing some integer value and filled in missing values with either infinity or negative infinity, depending on whether they wanted null values to show up at the beginning or the end of the index, they could conceivably have used these numbers.

Yeah I was worried about these numbers being used for this reason, but fortunately this also seems like a case where it would be known that these numbers are being used.

Yeah, I think this is an interesting example of “your key isn’t just a packed tuple” that I mentioned. And hopefully, those with keys of these forms would know who they are and would not rely on a tool like this if it existed.

I think perhaps the more interesting question regarding the proposed tool is whether there is anybody who knows they would be affected by this bug and would be able to use such a tool if all it did was scan the key space for keys that consist entirely of packed tuples and report those that contain these two integers.

A small followup to this bug: I’ve added tuple encoding support for bigints in the javascript bindings. It has the caveat that, because there’s only a single way to encode numbers in the tuple encoding, tuple.encode(1n) === tuple.encode(1). (So unfortunately tuple.decode(tuple.encode(1n)) === 1.) But I figure that’s ok, and it matches the other bindings. (It’s impossible for a bigint to “pop out” after encoding a normal javascript number.)

Based on this thread it sounds like the behavior in the java bindings is considered most correct, so I’ve copied the java implementation’s behavior for these ‘problem integers’. (And ported the java tests to typescript to make sure).

It’d be nice to make a note about all this in the tuple spec - since we now have at least 3 compatible implementations for arbitrary precision integers in the bindings.

I think it’s good as a key. However, we have to be aware that BigInt and number are incompatible.

1 + 1n // Error
1 === 1n // false

Has the bug fix already been completed? If not, I vote for #2.

Yeah, I know. It’s not ideal - the choice was either this behavior (all small-to-medium ints in javascript get encoded as fdb integers) or making it so small integers in javascript get encoded as floats or doubles. I decided on this behavior because:

  • If you encode 1 in javascript and then decode it in python, it should come out as 1, not 1.0
  • If you encode 1 in python and decode it in javascript, it should come out as 1. Not 1n.
  • The tuple encoding for the integer 1 is much smaller than the tuple encoding for the double precision number 1. Most tuples with numbers in them will be small integers, and the small integer encoding is a better choice.

You can always override this choice by using tuple.encode({type: 'float' | 'double', value: 1}) if you want to force your numbers to be encoded as floats / doubles. This table has examples of all the different values and how they’re encoded.