I think the “issue” is that
dir.Sub(arg1, arg2, ..., argN) creates a tuple
(arg1, arg2, ..., argN) and then encodes that tuple into binary. If you are already passing a tuple as the first argument to
dir.Sub(...), you are in essence creating a tuple with a single element that is itself a tuple. In your last examples, you were passing multiple arguments to Sub(…), and it produced the expected result.
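To make the nesting concrete, here is a minimal Python sketch (a hypothetical variadic sub function standing in for dir.Sub(...), not the actual Go binding):

```python
# Hypothetical sketch of what a variadic Sub(...) does internally:
# it collects its arguments into a tuple before encoding.
def sub(*elements):
    # The binding sees exactly the tuple of arguments it was given.
    return elements

# Passing multiple arguments produces the flat tuple you expect:
assert sub(1, 2, 3) == (1, 2, 3)

# Passing an already-built tuple as the single argument nests it:
assert sub((1, 2, 3)) == ((1, 2, 3),)
```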
You are having the same issue I had a few years ago, and it comes from a change in the design of tuples that made things ambiguous with the API at the time.
I think this is a common design flaw which is present in most bindings, and everyone will probably stumble on this when learning about layers.
For the first version of directories and subspaces, embedded tuples were not supported, and in practice using them would flatten all the items: passing a key like
(1, (2, 3), 4) would become
(1, 2, 3, 4) once encoded or decoded. This hid the bug in the code, and everyone took a dependency on this “behavior” as if it were normal.
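The flattening behavior can be sketched like this (illustrative only; this is not the real FoundationDB tuple encoding):

```python
# Illustrative sketch of how an early encoder that flattens nested
# tuples hides the nesting bug.
def flatten(t):
    out = []
    for item in t:
        if isinstance(item, tuple):
            out.extend(flatten(item))  # recursively splice nested tuples
        else:
            out.append(item)
    return tuple(out)

# The accidental nesting disappears, so the bug goes unnoticed:
assert flatten((1, (2, 3), 4)) == (1, 2, 3, 4)
# ...and so does the single-element wrapping from the previous example:
assert flatten(((1, 2, 3),)) == (1, 2, 3)
```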
When embedded tuples were introduced, this immediately blew up with the same result that you are seeing. For example, I had to make a breaking change in the .NET binding API, because I was not able to fix this issue due to overload resolution ordering issues between interfaces and generics in C#. My only solution was to change the name of the method that takes items, to distinguish it from the method that takes tuples.
In the case of the .NET binding, the convention is now that methods like
EncodeKey(T1 arg1, T2 arg2, ..., TN argN) will create a tuple with all the arguments, while methods
Pack((T1, T2, ..., TN) tuple) will take a single argument that is the already-created tuple. I think that
dir.Sub(..., ..., ...) in the Go binding corresponds to my
EncodeKey(x1, x2, ...), while you thought that it was the equivalent of Pack(tuple).
To illustrate with the .NET binding:
EncodeKey(1, 2, 3, 4) is the equivalent of packing
(1, 2, 3, 4) of length = 4.
Pack((1, 2, 3, 4)) is the equivalent of packing
(1, 2, 3, 4) of length = 4, while
EncodeKey((1, 2, 3, 4)) is the equivalent of packing
( (1, 2, 3, 4), ) of length = 1.
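The same convention can be transliterated into a short Python sketch (hypothetical encode_key and pack functions that just build the logical tuple, skipping the binary encoding step):

```python
# EncodeKey builds a tuple from its arguments; Pack takes a tuple as-is.
def encode_key(*args):
    return args          # N arguments -> tuple of length N

def pack(t):
    return tuple(t)      # the caller already built the tuple

assert encode_key(1, 2, 3, 4) == (1, 2, 3, 4)       # length 4
assert pack((1, 2, 3, 4)) == (1, 2, 3, 4)           # length 4
assert encode_key((1, 2, 3, 4)) == ((1, 2, 3, 4),)  # length 1!
```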
Subspace, Encodings and Type Systems
I had to do a lot of work in the .NET binding to solve the issue of encoding and decoding keys, so that the application layer does not need to think about any of that, and also to prevent easy mistakes (like the one you just discovered the hard way).
I had a few discussions with @dave at the time about a “Type System” that could help Layer implementors deal with this. I implemented a few of the ideas in the .NET binding, in the form of
IKeyEncoder<T>, and other variants (
IKeyEncoder<T1, T2, ...> and so on).
These types handle all the business of encoding keys (single field or composite) into binary and back. They offer different guarantees: one implementation uses the Tuple Encoding and is used by default (for compatibility with the other bindings), while other implementations can use something like protobuf (more compact, but no ordering guarantees, etc.).
You can then combine a Subspace (a pre-computed binary prefix) and a Key Encoder to create a “Key Space” that does both: encode logical keys using some encoding scheme, and prepend the binary prefix automatically.
Some KeySpaces are dynamically typed, like DynamicKeySubspace, while others are statically typed, like TypedKeySubspace&lt;string, int, Uuid128&gt;, and the layer code can decide which one to use depending on circumstances, personal preference, or risk tolerance.
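Here is a rough sketch of the "subspace + key encoder" idea in Python, with hypothetical names (this TypedKeySubspace is NOT the real .NET API, and a real encoder would emit the FoundationDB tuple encoding; this one just concatenates a toy representation for illustration):

```python
class TypedKeySubspace:
    def __init__(self, prefix: bytes, *types: type):
        self.prefix = prefix  # pre-computed binary prefix (the subspace)
        self.types = types    # expected type of each key element

    def pack(self, *values):
        # The static typing catches arity/type mistakes (like passing a
        # tuple as a single argument) instead of silently nesting it.
        if len(values) != len(self.types):
            raise TypeError("wrong number of key elements")
        for value, expected in zip(values, self.types):
            if not isinstance(value, expected):
                raise TypeError(f"expected {expected.__name__}")
        # Toy encoding: prefix + naive representation of each element.
        return self.prefix + b"".join(repr(v).encode() for v in values)

# A key space for (name, id) keys under a fixed binary prefix:
users = TypedKeySubspace(b"\x15\x01", str, int)
assert users.pack("alice", 42).startswith(b"\x15\x01")

try:
    users.pack(("alice", 42))   # the nesting mistake is now caught
except TypeError:
    print("rejected: a tuple is not two key elements")
```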
I don’t think that other bindings went that far, so you are probably stuck at the basic level of combining binary prefix (subspaces) with packed tuples yourself.