I implemented support for user-defined types in the tuple layer of the .NET binding, but never really used it, probably because I’m using a statically typed language where the caller provides the exact type to deserialize, while in dynamically typed languages the tuple layer implementation usually creates the types by itself.
By that I mean that I will probably call TuPack.Decode<int, long, double, string, SomeEnum>(...), giving the exact types I want. This also allows me to define an ITupleSerializable interface on custom types, or use dependency injection to inject deserializers for them. C# now has much better support for tuples and type deconstruction than in the early days, which would make this even easier.
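As a rough illustration, here is what that call site could look like. This is a sketch only: the TuPack.Decode spelling follows the text above and may differ from the binding’s actual method names, I’m assuming it returns something deconstructible, and the ITupleSerializable shape (including the TupleWriter/TupleReader parameter types) is a hypothetical design, not the binding’s real API.

```csharp
// Sketch only: TuPack and Slice come from the FoundationDB .NET binding;
// the namespace and the exact Decode signature are assumptions.
using System;
using FoundationDB.Client;

public enum SomeEnum { None, Foo, Bar }

// Hypothetical opt-in interface (shape is an assumption, not the
// binding's actual API): the custom type packs/unpacks itself.
public interface ITupleSerializable
{
    void PackTo(ref TupleWriter writer);     // TupleWriter: the binding's tuple writer
    void UnpackFrom(ref TupleReader reader); // TupleReader: the binding's tuple reader
}

public static class KeyDumper
{
    public static void DumpKey(Slice packedKey)
    {
        // The caller names the exact types up front, so no runtime type
        // registry is needed to decode the key.
        (int id, long version, double score, string name, SomeEnum kind)
            = TuPack.Decode<int, long, double, string, SomeEnum>(packedKey);

        Console.WriteLine($"{id} v{version}: {name} = {score} ({kind})");
    }
}
```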
In practice, I’ve never liked custom types that much, because they pollute the types themselves with custom serialization methods (what if they also need to serialize to JSON, XML, or some other format?), and tie them to my particular tuple implementation (there could be other libraries).
This also pollutes the application setup, and can be difficult when composing multiple components together (either collisions on the type id, or the need to call multiple “init” methods that each hook up their own serializers).
And finally, this makes it impossible for tools or applications written in other languages to decode the content of the keys, which makes diagnosing issues difficult.
For simple custom types (composed of a few fields of basic types), I would prefer using embedded tuples, as sketched below.
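A minimal sketch of what I mean, assuming the binding can pack and unpack a C# value tuple as an embedded tuple (it may require its own tuple type instead); GeoPoint, the EncodeKey name, and the key layout are invented for illustration:

```csharp
// Sketch only: assumes value tuples map to embedded tuples;
// method names follow the TuPack spelling used above and may differ.
using FoundationDB.Client;

public readonly record struct GeoPoint(double Lat, double Lon);

public static class GeoKeys
{
    // Encode as ("geo", subjectId, (lat, lon)): the custom type is
    // flattened into an embedded tuple, with no serializer to register.
    public static Slice Encode(long subjectId, GeoPoint pt)
        => TuPack.EncodeKey("geo", subjectId, (pt.Lat, pt.Lon));

    public static (long SubjectId, GeoPoint Point) Decode(Slice key)
    {
        (string _, long id, (double lat, double lon))
            = TuPack.Decode<string, long, (double, double)>(key);
        return (id, new GeoPoint(lat, lon));
    }
}
```

The point is that any tuple layer, in any language, can still decode this key as ("geo", id, (lat, lon)) without knowing anything about GeoPoint.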
For complex types I’d probably use a custom binary encoding anyway, which would be more compact than the tuple encoding.
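For comparison, here is a minimal sketch of such a hand-rolled encoding; the Sample type and its layout are invented for illustration:

```csharp
// Sketch only: a hand-rolled fixed layout for an invented "complex" value.
using System;
using System.Buffers.Binary;

public readonly record struct Sample(int Id, long Timestamp, double Value)
{
    // 4 + 8 + 8 = 20 bytes, fixed width, little-endian, no per-field
    // type codes (the tuple encoding spends an extra type byte per field).
    public const int Size = 4 + 8 + 8;

    public void WriteTo(Span<byte> dst)
    {
        BinaryPrimitives.WriteInt32LittleEndian(dst, Id);
        BinaryPrimitives.WriteInt64LittleEndian(dst[4..], Timestamp);
        BinaryPrimitives.WriteDoubleLittleEndian(dst[12..], Value);
    }

    public static Sample ReadFrom(ReadOnlySpan<byte> src) => new(
        BinaryPrimitives.ReadInt32LittleEndian(src),
        BinaryPrimitives.ReadInt64LittleEndian(src[4..]),
        BinaryPrimitives.ReadDoubleLittleEndian(src[12..]));
}
```

Note that, unlike the tuple encoding, a layout like this does not preserve ordering, so it is better suited to values than to keys.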
Note that this is from the perspective of statically typed languages. Things are probably dramatically different for dynamically typed languages. An example of that is the different approach taken to serializing versionstamps in tuples.