Implementing VersionStamps in bindings

I'm currently adding support for VersionStamps in the .NET Binding.

I'm a bit confused by some discrepancies between the documentation, the code implementation, and the behaviour I'm seeing. Also, I was not able to find much documentation on this subject, and no concrete examples.

10 bytes or 12 bytes?

UPDATED from answers below.

There are two flavors of Versionstamps: 80 bits and 96 bits long. The former is what the database understands, and the latter is a client-side convention.

  • The 80-bit versionstamps are 10 bytes long, composed of 8 bytes (Transaction Version) followed by 2 bytes (Transaction Batch Order). They are ordered and guaranteed to be unique per transaction. They are handled by the FDB_MUTATION_TYPE_SET_VERSIONSTAMPED_KEY and FDB_MUTATION_TYPE_SET_VERSIONSTAMPED_VALUE mutations.
  • The 96-bit versionstamps are actually an 80-bit versionstamp followed by 2 extra bytes called the User Version, so 12 bytes in total. These two bytes can be used if a transaction wants to insert more than one key. They are not seen by the database, and are just a convention at the binding level.

Both the Python and Java bindings seem to have taken the route of only exposing 96-bit versionstamps at the API level, using a default User Version of 0. So these versionstamps will always be 12 bytes; the shorter 10-byte ones are not exposed.
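
For illustration, here is a minimal sketch of what a 96-bit versionstamp could look like in a .NET binding (the type and member names are hypothetical, not an actual API):

// Hypothetical layout of a 96-bit versionstamp (12 bytes total): the first
// 10 bytes are filled in by the database, the last 2 are a client-side convention.
public readonly struct VersionStamp
{
    public readonly ulong TransactionVersion; // bytes 0..7 (big-endian), set by the database
    public readonly ushort BatchOrder;        // bytes 8..9 (big-endian), set by the database
    public readonly ushort UserVersion;       // bytes 10..11, binding-level convention only

    private VersionStamp(ulong version, ushort batch, ushort user)
        => (TransactionVersion, BatchOrder, UserVersion) = (version, batch, user);

    // An "incomplete" stamp: placeholder bytes (FF by convention) that the
    // database will overwrite at commit time; only the user version is known.
    public static VersionStamp Incomplete(ushort userVersion = 0)
        => new VersionStamp(ulong.MaxValue, ushort.MaxValue, userVersion);
}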

When the application wants to create a key using a versionstamp, it does not know the actual value yet, so the pattern is to use a placeholder stamp, which then gets overwritten when setting a versionstamped key or value at commit time. These stamps are called incomplete stamps in some bindings.

The binding tracks the offset in the binary key or value where this placeholder stamp is located, and passes this info to the database, which then replaces the bytes with the actual stamp. After the transaction has committed, the application can query the actual stamp used via fdb_transaction_get_versionstamp.

So for example, if your layer uses keys like ('foo', <stamp>) with the Tuple Layer, the serialized binary key would look something like this:

('foo', <placeholder_stamp>) => < 02 'foo' 00 33 xx xx xx xx xx xx xx xx xx xx 00 00 >

note: currently, bindings use xx = FF as the placeholder, but this could be anything

The prefix 02 'foo' 00 corresponds to the encoding of the 'foo' string; the byte 33 is the type header for 96-bit versionstamps in the Tuple Encoding and is not part of the stamp. The 10 'xx' bytes are placeholders where the actual stamp will be written, and the last 00 00 is the user version (0 by default).

When calling the SetVersionstampedKey method, you need to pass an additional value, which is the offset in the key where the stamp is located. This is done by appending 2 extra bytes at the end of the key, containing the offset in little-endian. These 2 bytes are not actually part of the key, and will be removed by the SetVersionstampedKey method.

Since the location of the stamp in our example above is at offset 6, the actual byte array passed to the SetVersionstampedKey method will be:

tr.SetVersionstampedKey( < 02 'foo' 00 33 xx xx xx xx xx xx xx xx xx xx 00 00 06 00>, 'hello world')

At commit time, the last two bytes are removed, and the 10 bytes at the specified offset are filled in by the database. If the transaction commits at version 0x0123456789ABCDEF with batch order 0x1234, the key will become:

< 02 'foo' 00 33 01 23 45 67 89 AB CD EF 12 34 00 00 > = 'hello world'

In practice, the batch order will usually be 0 or a low number (it depends on the number of concurrent transactions).

The first 10 bytes are controlled by the database, and the last two bytes ('00 00') are controlled by the user.
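
A minimal sketch of how a binding could append that offset suffix (the helper name is hypothetical; see the end of this thread regarding 2-byte vs 4-byte offsets under newer API versions):

// Hypothetical helper: append the placeholder's offset (2 bytes, little-endian)
// to the key before handing it to the SET_VERSIONSTAMPED_KEY mutation.
static byte[] AppendVersionstampOffset(byte[] keyWithPlaceholder, int stampOffset)
{
    var buffer = new byte[keyWithPlaceholder.Length + 2];
    Array.Copy(keyWithPlaceholder, buffer, keyWithPlaceholder.Length);
    buffer[buffer.Length - 2] = (byte)(stampOffset & 0xFF);        // low byte first (little-endian)
    buffer[buffer.Length - 1] = (byte)((stampOffset >> 8) & 0xFF); // high byte
    return buffer;
}

// Usage, matching the example above (the stamp starts at offset 6):
// tr.SetVersionstampedKey(AppendVersionstampOffset(packedKey, 6), value);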

Original question:

Custom Serialization Required

The way Java and Python deal with Versionstamps and tuples is a little bit … weird. I'm not a big fan of needing a custom method to build tuples that contain a versionstamp (due to the need to get the byte offset where it starts). This does not seem to play well with other serialization mechanisms (for example, when combined with subspace prefixes, or other custom encodings).

I was wondering if another approach would be better: using a specific byte pattern to mark the location of a stamp (client side). When such a byte array is passed to the VersionStampKey mutation, it would look for this pattern and obtain the position that way. => no need for special code paths: any binary encoding scheme can simply output this pattern anywhere it wants, and it will be recognized at the last step.
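
A minimal sketch of that scan, assuming a hypothetical helper on the binding side:

// Hypothetical: scan the key for the marker token to recover the stamp offset.
// A real implementation would also verify that the token occurs exactly once.
static int FindStampToken(byte[] key, byte[] token)
{
    for (int i = 0; i <= key.Length - token.Length; i++)
    {
        int j = 0;
        while (j < token.Length && key[i + j] == token[j]) j++;
        if (j == token.Length) return i; // offset of the placeholder stamp
    }
    return -1; // not found: fail (or retry with a different token)
}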

The obvious problem: what if this pattern appears in the key itself by random chance? It cannot be something trivial like all zeros or all FF. Maybe $VERSTAMP!$ or something like that?

There is precedent with, for example, the multipart content encoding (RFC 1341 (MIME): 7 The Multipart content type), which explicitly defines the expected chunk separator. Most implementations choose a constant (or random) separator and check that it is not contained in the message itself. If it is, they choose another marker.

We could maybe choose a default token to mark the spot where a VersionStamp is, but have a mechanism (somewhere on the transaction? or as an extra parameter to the VersionStampKey helper method) to specify the exact token used.

// use default token
var key = AcmeLib.SerializeKey(("foo", 123, VersionStamp.Incomplete(42), 456));
// -> <'foo',123,$VERSTAMP!$42,456>
tr.VersionStampKey(key, ....);

// risk of collision
var key = AcmeLib.SerializeKey(("foo", 123, "Oh no, I have a $VERSTAMP!$ inlined", VersionStamp.Incomplete(42), 456), token: "ABCDEFGHIJ");
// -> <'foo',123,'Oh no, I have a $VERSTAMP!$ inlined',ABCDEFGHIJ42,456>
tr.VersionStampKey(key, ..., token: "ABCDEFGHIJ");

We could even decide to generate a random token per transaction, and ensure that it does not appear twice in the same key. If it does, the transaction would fail and retry (with a NEW random token), and the probability that the next token would also be contained in another key of the same transaction would be very low.

db.Run((tr) =>
{
    var token = tr.GetStampToken(); // -> "Aoew!4='£K"
    var key = AcmeLib.Serialize(("foo", 123, VersionStamp.Incomplete(123), 456), token);
    // -> < 'foo',123,Aoew!4='£K42,456 >
    tr.VersionStampKey(key, ....);
});

fdb_transaction_get_versionstamp

UPDATED

This method can be used to obtain the actual value that the database will insert into the key (or value) in place of the temporary placeholder. This value is an 80-bit value that is the same for the whole transaction. If the transaction needs multiple ids, the way to get them is to use a 96-bit versionstamp, with the last 16 bits being a user-provided integer.

This method must be called BEFORE the call to fdb_transaction_commit, and the Future will be resolved AFTER the transaction commits successfully (or fails).

An example in Java (quoted from the replies below):
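
CompletableFuture<byte[]> vsFuture = tr.getVersionstamp(); // request the future BEFORE committing
tr.commit().join();                    // blocking call to wait on the commit
byte[] versionstamp = vsFuture.join(); // already resolved once the commit succeeded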

This may have some impact on code that uses async operations (.NET, TypeScript with async/await, Java with CompletableFutures, etc…), especially when combined with retry loops (where the code does not manage the transaction itself, and in particular is not the one that invokes the commit method).

Examples of patterns that fail (deadlocks, accessing an already-closed transaction), along with some possible solutions for Java, are shown in the replies below.

As far as the FDB core is concerned, versionstamps are 10 bytes: 8 bytes of version + 2 bytes transaction number within version.

As far as the tuple format is concerned, versionstamps are 12 bytes: 8 bytes of version + 2 bytes transaction number within version + 2 bytes operation counter within transaction

The latter format is obviously designed so that you can fill it in successfully with the help of the built-in FDB operations. But FDB itself doesn't know anything about the format of the last 2 bytes.

Does that help?

Yes. So what is the “operation counter within a transaction”? In all my tests, the 2 bytes at offsets 8 and 9 are always 0. What are the conditions required to see a non-zero value there?

At first glance it seems redundant with the user version (extra 2 bytes).

It is the same as the user version.

VVVVVVVVTTUU

8 bytes version (filled by FDB)
2 bytes transaction# within version (filled by FDB)
2 bytes “user version” (you should usually set this to 0 for the first versionstamped item you insert during a transaction, 1 for the second, etc)

The reason that the last 2 bytes can’t be filled in automatically is because you very well may want to insert logically-the-same versionstamped item in more than one index, in which case it’s critical that it get the exact same (12-byte) versionstamp in each place. But the API can’t distinguish that from inserting two different versionstamped items, in which case you probably want to give them different versionstamps to preserve their order.
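
To illustrate (a hedged sketch using the hypothetical API names from this thread):

// Logically-the-same record in two indexes: reuse the SAME user version, so
// both entries receive the identical 12-byte versionstamp at commit time.
tr.SetVersionstampedKey(indexByTime.Encode(VersionStamp.Incomplete(0)), recordId);
tr.SetVersionstampedKey(indexByKind.Encode(VersionStamp.Incomplete(0)), recordId);

// Two DIFFERENT items inserted by one transaction: use distinct user versions
// (0, then 1, ...) so that their relative order is preserved.
tr.SetVersionstampedKey(queue.Encode(VersionStamp.Incomplete(0)), item1);
tr.SetVersionstampedKey(queue.Encode(VersionStamp.Incomplete(1)), item2); // sorts after item1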

Hm, yeah, this might have been an instance where brevity was chosen instead of clarity. Maybe a 12 byte “versionstamp” should have been an ExtendedVersionstamp or something to indicate it is different from a 10 byte versionstamp? ¯\_(ツ)_/¯

This can be improved somewhat by the fact that it would be the magic string preceded by the versionstamp type code, which might make it less likely to collide (or, well, maybe not). We ultimately decided that a magic string was more error-prone than the extra methods were ergonomically awkward, so we went with what's in the codebase now. To handle subspaces, we added methods to pack a tuple with a prefix so that the prefix length gets added into the offset correctly. It doesn't quite work for suffixes, but that seems to be less common.

What is your order of operations? The get_versionstamp method is somewhat weird: you have to get the future before the transaction is committed, and then that future will be ready only after the commit. So something like:

CompletableFuture<byte[]> vsFuture = tr.getVersionstamp();
tr.commit().join();  // blocking call to wait on commit
byte[] versionstamp = vsFuture.join(); // non-blocking call to get versionstamp

(But, like, in C# instead of Java.) I think you can get used_during_commit if you call getVersionstamp after commit rather than before.

Are you only ever running one transaction at a time? You will only see a non-zero “batch version” (we call it) if there are multiple transactions being committed together at a single version, which can only happen if there are concurrent commits. The easiest way is to probably fire off multiple transactions and then wait for all of them.

Just to expand on this a little, with versionstamps in particular, it’s often the case that you will want a forward index and a reverse index, i.e., (keyspace_1, key) -> version and (keyspace_2, version) -> key. There are a couple of reasons for this, but the most obvious is that, as you don’t know what key you wrote if you only write the one with a version, then to remove that key you have to somehow figure out what that version was. So you can either scan keyspace_2 (slow) or look it up in the other index (fast). The other less obvious reason is that if you get a commit_unknown_result error, then if all you had was the index with versions in keyspace_2, it would be really easy to add multiple entries in keyspace_2 in subsequent retries by mistake. Having keyspace_1 around lets you either detect that you are in a retry loop and not write it again (essentially letting you know that your commit succeeded) or, if index maintenance is done correctly, lets you clean up keyspace_1 and keyspace_2 within the retry loop. But all of that depends on the versionstamp having the same user version each time for a logical record within the database.
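
A hedged sketch of that pattern, using the hypothetical names from this thread (keyspace1, keyspace2, EncodeStamp): the forward entry carries the stamp in the value, the reverse entry carries it in the key.

// Forward index: (keyspace_1, key) -> version   (the stamp lives in the VALUE)
tr.SetVersionstampedValue(keyspace1.Encode(key), EncodeStamp(VersionStamp.Incomplete(0)));

// Reverse index: (keyspace_2, version) -> key   (the stamp lives in the KEY)
tr.SetVersionstampedKey(keyspace2.Encode(VersionStamp.Incomplete(0)), key);

// Both entries use the same user version (0), so after the commit they hold the
// exact same 12-byte stamp, which is what keeps the two indexes consistent.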

I have updated the original question about stamp size from these answers.

Ok so this is a bit confusing. I'm calling the getVersionstamp method AFTER the commit has completed, but I'm getting an error Operation issued while a commit was outstanding, which seems wrong to me: the commit was already completed, so it was not “while”, and there was no “outstanding” commit? Maybe the wording of the error message is wrong?

[…] but you have to call the future before it is committed, and then that future will be ready only after the commit

I'm having some issues with this API, because it seems a little bit weird from a .NET coder's point of view when dealing with tasks, AND it does not appear to compose well with retry loops and multiple layers.

My test creates and commits the transaction manually, but a typical application will never do that, and will go through one of the retry loops (db.run(...) in Java):


async Task<IActionResult> SomeControllerMethod(....)
{
  //... check args ...
  await db.WriteAsync((tr) =>
  {
     // traditional write operations
     tr.ClearRange(....);
     tr.Set(..., ...);
     // new Versionstamp API
     tr.SetVersionstampedKey(MAKE_KEY_WITH_VERSION_STAMP(), ....);

  }, HttpContext.Cancel);
  //...
  return View(....);
}

obviously, the db code would be inside some Business Logic class, and not inlined in the controller!

If the code wanted to obtain the actual Versionstamp, as well as some other result extracted from the database, both at the same time, it will look very ugly:

//...
Task<Versionstamp> stampTask = null; // hoisted out of the lambda's scope
Slice result = await db.ReadWriteAsync(async (tr) =>
{
     var val = await tr.Get("SOME_KEY", ....);

     tr.SetVersionstampedKey(MAKE_KEY_WITH_VERSION_STAMP(), ..);
     stampTask = tr.GetVersionstamp();

     return val;
}, cancel);
Versionstamp stamp = await stampTask; // need an extra await here!
//...
var data = DoSomethingWithIt(result, stamp);
return View(new SomeViewModel { Data = data, ... });

Having to hoist a task outside the scope and do an additional await looks weird in modern .NET code.

The retry loop could return the Versionstamp task alongside the result like this, but again it does not look nice:

(Slice result, Task<Versionstamp> stampTask) = await db.ReadWriteAsync(async (tr) =>
{
     Slice val = await tr.Get("SOME_KEY", ....);

     tr.SetVersionstampedKey(MAKE_KEY_WITH_VERSION_STAMP(), ..);

     return (val, tr.GetVersionstamp());
}, cancel);

Versionstamp stamp = await stampTask; // still need an extra await here

I think what the user would expect is for the retry loop to return the Versionstamp directly, not a Future:

(Slice result, Versionstamp stamp) = await db.ReadWriteAsync(async (tr) =>
{
     Slice val = await tr.Get("SOME_KEY", ....);

     tr.SetVersionstampedKey(MAKE_KEY_WITH_VERSION_STAMP(), ..);

     return (val, tr.GetVersionstamp());
}, cancel);

The inner lambda would have the signature Func<IFdbTransaction, Task<(TResult, Task<VersionStamp>)>>, which is a mouthful. The retry-loop method wants to return a Task<(TResult, VersionStamp)>, not a Task<(TResult, Task<VersionStamp>)>, so some generic magic needs to be done.
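
For illustration, a hedged sketch of what such an unwrapping overload could look like (all names hypothetical):

// Hypothetical extension: run the handler in the existing retry loop, capture
// the stamp task it produced, and await it only after the commit has succeeded.
public static async Task<(TResult, VersionStamp)> ReadWriteAsync<TResult>(
    this IFdbDatabase db,
    Func<IFdbTransaction, Task<(TResult, Task<VersionStamp>)>> handler,
    CancellationToken ct)
{
    Task<VersionStamp> stampTask = null;
    TResult result = await db.ReadWriteAsync(async (tr) =>
    {
        (TResult res, Task<VersionStamp> st) = await handler(tr);
        stampTask = st; // re-captured on every retry attempt
        return res;
    }, ct);
    return (result, await stampTask); // already resolved: the commit has succeeded
}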

It may have some effects on the overall API because, in .NET, you cannot have overloads whose signatures differ only by the return type. So it would not be easy to have an overload of ReadWriteAsync(...) that returns plain results, and another one that also returns a tuple with the resolved versionstamp. You'd probably have to change the method name, or add some arguments to disambiguate.

I could have a ReadWriteWithVersionStampAsync<TResult>() => Task<(TResult, VersionStamp)> overload, but then it may cause issues when composing multiple libraries:

Let's say that in an HTTP request controller, the outer scope starts a retry loop, and then passes the transaction along to some business logic, which then calls into other Layers (Document, Blob, Index, …). If one layer is refactored somehow and starts using versionstamps, it would have an impact on the outer scope (in the HTTP controller code), because it needs to know to call GetVersionstamp before the commit, and to extract the value after it has succeeded, outside the scope of the retry loop. How do I pass the actual versionstamp back into the original layer (which has long since returned and been garbage collected)?

It looks like you need dedicated transactions that are fully handled by the Layer code, which will not be able to compose with other layers inside a single transaction?

I may be misunderstanding what you’re saying here, but I think this is a good illustration for why get_versionstamp works the way it does. If one of the layers needs the versionstamp but isn’t responsible for committing, it can call get_versionstamp anyway and the returned future will be set when the transaction ultimately commits at a higher level.

If the layer is going to use the versionstamp of a transaction, it has to be around in some form after the commit completes. If it’s not, what’s the use-case for needing it?

You're right, but I guess the issue comes from how you would write this using idiomatic .NET with async/await.

Let's say I have a very simple Web API controller that creates versionstamped keys but doesn't need to know their value. This is nice and simple with the current API. (note: I'm assuming that the encoding of versionstamps at the tuple layer is magical and just works; that's outside the scope of this sample)

public class SomeApiController
{
	#region Stuff..
        // all initialized from the Web Application via HttpContext and DI...
	private IFdbDatabase Db;
	private CancellationToken Cancellation;
	private IDynamicKeySubspace Location;
	private string DoSomethingWithIt(Slice data) => "hello";
	#endregion

	// REST EndPoint
	public async Task<SomeResult> SomeRestMethod(Guid id)
	{
		// here is the "business logic"
		var data = await this.Db.ReadWriteAsync(async (tr) =>
		{
			// read a key
			var val = await tr.GetAsync(this.Location.Keys.Encode("A", id));

			// create stamped keys
			tr.SetVersionStampedKey(this.Location.Keys.Encode("B", tr.CreateStamp(1)), Slice.FromString("some_value"));
			tr.SetVersionStampedKey(this.Location.Keys.Encode("B", tr.CreateStamp(2)), Slice.FromString("some_other_value"));

			return val;
		}, this.Cancellation);

		// return
		return new SomeResult { Foo = DoSomethingWithIt(data) };
	}
}

I would NOT want to write something like this:

await this.Db.ReadWriteAsync(async (tr) =>
{
	// read a key
	var val = await tr.GetAsync(this.Location.Keys.Encode("A", id));

	// create stamped keys
	tr.SetVersionStampedKey(this.Location.Keys.Encode("B", tr.CreateStamp(1)), Slice.FromString("some_value"));
	tr.SetVersionStampedKey(this.Location.Keys.Encode("B", tr.CreateStamp(2)), Slice.FromString("some_other_value"));

	tr.GetVersionStampAsync()
	  .ContinueWith((t) =>
	{ // this runs somewhere on the ThreadPool, at any time in the future!
		var stamp = t.Result;
		DoSomethingWithIt(val, stamp);
	}); // => if it fails, nobody will know about it!
}, this.Cancellation);

The Task continuation runs on another thread, maybe later, long after the HTTP context has been collected. It also has no way to send the result back to the client, and could throw exceptions into the void.

Now if I want to pass the actual versionstamp outside the scope of the retry loop and back to the controller while the HTTP context is still alive, I could try to change it like this, which is at least still using async/await:

// REST EndPoint
public async Task<SomeResult> SomeRestMethod(Guid id)
{

        // first part of the business logic (that talks to the db)
	(Slice data, Task<VersionStamp> stampTask) = await this.Db.ReadWriteAsync(async (tr) =>
	{
		// read a key
		var val = await tr.GetAsync(this.Location.Keys.Encode("A", id));

		// create stamped keys
		tr.SetVersionStampedKey(this.Location.Keys.Encode("B", tr.CreateStamp(1)), Slice.FromString("some_value"));
		tr.SetVersionStampedKey(this.Location.Keys.Encode("B", tr.CreateStamp(2)), Slice.FromString("some_other_value"));

		return (val, tr.GetVersionStampAsync());
	}, this.Cancellation);
	
	// need another await
	VersionStamp stamp = await stampTask; // <-- this is ugly!
	var foo = DoSomethingWithIt(data, stamp); // hidden in here is the second part of the business logic

	// return
	return new SomeResult { Foo = foo };
}

But the actual business logic is split in two: the retry loop does the first part of the job, but the actual “finish” is in DoSomethingWithIt, which has to know that I used two stamps with user versions 1 & 2. Also, the outer controller code has to act as the middleman, and do the task resolution to get the stamp. => this is very tied to the implementation (using FoundationDB) and may not be easy to abstract away.

After playing a bit with it, I think the ideal would be to add an onSuccess handler to the retry-loop logic, which gets called once the transaction commits, and is passed the result of the inner handler plus the resolved versionstamp. It can then consume and post-process the stamp.

// REST EndPoint
public async Task<SomeResult> SomeRestMethod(Guid id)
{

	var foo = await this.Db.ReadWriteAsync(
		handler: async (tr) =>
		{ // this part runs inside the transaction, and can be retried multiple times

			// read a key
			var val = await tr.GetAsync(this.Location.Keys.Encode("A", id));

			// create stamped keys
			tr.SetVersionStampedKey(this.Location.Keys.Encode("B", tr.CreateStamp(1)), Slice.FromString("some_value"));
			tr.SetVersionStampedKey(this.Location.Keys.Encode("B", tr.CreateStamp(2)), Slice.FromString("some_other_value"));

			return val;
		},
		success: (val, stamp) =>
		{ // this part runs at most once, after the transaction has committed successfully.
			return DoSomethingWithIt(val, stamp);
		},
		ct: this.Cancellation
	);

	// no additional fdb-specific logic here!

	return new SomeResult { Foo = foo };
}

Everything stamp-related is handled inside ReadWriteAsync(), and the result is the complete post-processed thing.

If I refactor this further, then no fdb-specific logic is visible inside the controller itself:

public class SomeApiController
{
	#region Stuff..
	private IFdbDatabase Db;
	private CancellationToken Cancellation;
	#endregion

	// REST EndPoint
	public async Task<SomeResult> SomeRestMethod(Guid id)
	{
		var engine = new MyBusinessLogicEngine(/*...*/);

		var result = await engine.DealWithIt(this.Db, id, this.Cancellation);
		// no additional fdb-specific logic here!
		return new SomeResult { Foo = result };
	}
}

// library in a different assembly somewhere
public class MyBusinessLogicEngine
{
	private IDynamicKeySubspace Location;

	public Task<string> DealWithIt(IFdbDatabase db, Guid id, CancellationToken ct)
	{
		return db.ReadWriteAsync(
			handler: async (tr) =>
			{ // this part runs inside the transaction, and can be retried multiple times

				// read a key
				var val = await tr.GetAsync(this.Location.Keys.Encode("A", id));

				// create stamped keys
				tr.SetVersionStampedKey(this.Location.Keys.Encode("B", tr.CreateStamp(1)), Slice.FromString("some_value"));
				tr.SetVersionStampedKey(this.Location.Keys.Encode("B", tr.CreateStamp(2)), Slice.FromString("some_other_value"));

				return val;
			},
			success: (val, stamp) =>
			{ // this part runs at most once, after the transaction has committed successfully.

				// second part of the business logic
				return "hello:" + val + ":" + stamp;
			},
			ct: ct
		);
	}

}

But now, the method in the Business Logic class takes in a Database instance, and not a Transaction, so it cannot compose well with another layer that would read/write some keys inside the same transaction.

I've seen this issue happen a lot in the last few years: the web controller has the db and is supposed to orchestrate everything in a single transaction to reduce latency. But all the various other libraries underneath will try to open their own transactions in parallel. Sometimes out of laziness, but sometimes out of necessity (like a Blob Layer that has to upload more than 10 MB, or here, deeply nested code that needs to execute after the transaction completes, but before the controller gets back the result).

So by “not composing” well, I mean that now code inside retry loops has to take the responsibility of handling the lifetime of the transaction, and/or interleave part of its code back up the callstack, while keeping all its own scope (arguments, variables, context objects) it allocated alive long enough.

I ran into a similar problem today. I am able to use Database.run() to get the versionstamp, as it returns the CompletableFuture<byte[]>, and then I can wait for it and get the value. However, if I use Database.runAsync() it gets pretty ugly, but I think I found the pattern for it. Not really reporting a bug, just agreeing that it can be a little painful to get the versionstamp out of there if you need it.

Works:

    byte[] versionstamp = open.run(tx -> {
      byte[] key = ds.get("test").packWithVersionstamp(Tuple.from(Versionstamp.incomplete()));
      tx.mutate(SET_VERSIONSTAMPED_KEY, key, "test".getBytes());
      return tx.getVersionstamp();
    }).get();

Deadlocks, for good reason:

    byte[] versionstamp = open.runAsync(tx -> {
      byte[] key = ds.get("test").packWithVersionstamp(Tuple.from(Versionstamp.incomplete()));
      tx.mutate(SET_VERSIONSTAMPED_KEY, key, "test".getBytes());
      return tx.getVersionstamp();
    }).get();

Cannot access closed object:

    byte[] versionstamp = open.runAsync(tx -> {
      byte[] key = ds.get("test").packWithVersionstamp(Tuple.from(Versionstamp.incomplete()));
      tx.mutate(SET_VERSIONSTAMPED_KEY, key, "test".getBytes());
      return CompletableFuture.completedFuture(tx);
    }).get().getVersionstamp().get();

OR

    byte[] versionstamp = open.runAsync(tx -> {
      byte[] key = ds.get("test").packWithVersionstamp(Tuple.from(Versionstamp.incomplete()));
      tx.mutate(SET_VERSIONSTAMPED_KEY, key, "test".getBytes());
      return CompletableFuture.completedFuture(tx);
    }).thenApply(Transaction::getVersionstamp).get().get();

Awkward but works:

    byte[] versionstamp = open.runAsync(tx -> {
      byte[] key = ds.get("test").packWithVersionstamp(Tuple.from(Versionstamp.incomplete()));
      tx.mutate(SET_VERSIONSTAMPED_KEY, key, "test".getBytes());
      return CompletableFuture.completedFuture(tx.getVersionstamp());
    }).get().get();

I think this is one of the expected ways that you would use a versionstamp. The versionstamp result cannot be known while the transaction is uncommitted, so the return value of a call to get the versionstamp is a future to be set after the commit succeeds or fails. I believe it’s the case that this future is set as part of the commit, so in that particular example (and all of Sam’s examples, assuming open is a database), the future will already be ready at the point where you are waiting on it. It seems possible to provide an API in the bindings that could accomplish the same thing in this case without requiring a second wait if you wanted (e.g. by having another version of the run loop that returns a versionstamp or something, though it could only work on a database).

Also, I don’t know if there’s a reason that get_versionstamp must be called before commit, but if not then perhaps it could be changed so that you could get the versionstamp future after commit as well. It doesn’t really help in these retry loop examples you and Sam gave, though, because you don’t have access to the transaction outside of the loop.

It sounds like you are trying to design for a case where you have a layer API that has some sort of request that takes a transaction (i.e. doesn’t do the commit itself) but wants to do work both in and after the transaction and not return to the root caller until it’s done with all of that. I agree that our FoundationDB client and our other bindings don’t provide that capability, so at least based on the current API you would have to design your layer’s API accordingly. For example, you may have to make multiple calls into the API or return a value from your request function that will signify once the request is fully complete.

Commit hooks could be useful in this and other scenarios, so it may be worth posing that as its own specific feature request.

I think one of the confusing aspects is that the method returns a Future like most other operations on a transaction, but it cannot be resolved until after the commit Future has resolved, unlike any other operation. The only other similar thing I can think of is Watches, which outlive the transaction scope.

Also, there is already one property, the committed version, which only exists after the commit, but this one is not exposed as a Future: you call fdb_transaction_get_committed_version and get the result immediately (or an error if you called it too soon). Couldn't we have a similar system for the stamps?

Maybe, if requesting the versionstamp has some overhead when committing, it could require an option to be set, like AUTO_RESOLVE_VERSIONSTAMPS? Once the commit succeeds, querying the versionstamp accessor would return the value (or an error), just like fdb_transaction_get_committed_version does today.

This would at least give a more straightforward view of the workflow: you do your thing with the transaction (get, set, clear, …) and maybe versionstamp some keys. If you need the actual value used, it is something that you call after a successful commit. Not half-before, half-after.
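
A sketch of the proposed workflow (the option and accessor names are hypothetical, per the suggestion above):

tr.Options.AutoResolveVersionstamps(); // hypothetical opt-in option
tr.SetVersionstampedKey(key, value);
await tr.CommitAsync();
VersionStamp stamp = tr.GetVersionstamp(); // would return immediately after a successful
                                           // commit, like GetCommittedVersion() today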

Like you said, if the Future is resolved at the same time as the commit’s Future, then it could be a simple value accessor, no ?

Then, for retry loops, the remaining issue to resolve is how to create a small window of execution that happens after the commit, but before the retry loop yields to the caller (and destroys the context).

This could be done with an onSuccess lambda executed after the commit, or maybe you could register callbacks on the transaction instance itself?

I'm not fond of the latter, because it can lead to a bad-practice pattern (in .NET, and probably similar in other languages) where the callback creates a new scope that captures all the variables and state of the outer scope, and could keep a lot of objects and state alive for no reason, which the GC cannot collect.

await db.ReadWriteAsync((tr) =>
{
     // outer scope that can allocate large keys and values (byte[])
     byte[] evil_buffer = new byte[100_000_000_000]; // will be captured by the inner scope!
     // serialize keys, call tr.SetVersionStampedKey(...), etc...

     tr.OnCommitted((state) =>
     { // inner scope

          var commitVersion = state.CommitVersion;
          var stamp = state.VersionStamp;
          // Do something with it!
     });
}, ...);

The inner scope will capture all the variables of the outer scope, so the evil_buffer may be kept alive longer than required (the GC cannot guess whether it is still alive). This is a current limitation of the Roslyn .NET compiler, which merges all scopes inside a method into a single container that aggregates all the captured state into the same heap instance. Not sure about Java or other languages.

The only way I know out of this is to extract each inner scope into a different method, and call them. This leads to broken-up code with pieces of scope everywhere.

I’ve been bitten by this so many times, maybe that’s why I’m so uneasy with this API pattern! :slight_smile:

I have a PR opened with my current work in progress here https://github.com/Doxense/foundationdb-dotnet-client/pull/72

I'm trying out an alternative way of serializing versionstamps, without requiring active support from the encoders (tuple layer, etc…), using the random-tokens idea I described above. It looks like it will simplify things a lot.

In most (all?) of the examples posed in this thread, there was no transaction object to make this call on after the commit because of the use of the retry loops.

That's true. I was thinking that the retry-loop machinery would be able to do this automatically.

My current thinking is that using one of the versionstamped atomic operations would set a flag. When the transaction needs to be committed, if this flag is set, the retry loop would also request the versionstamp, and hide the task internally. The result would then be exposed to the caller via some mechanism: either a second 'onSuccess' lambda that is invoked with the result of the loop body plus the resolved stamp, or some other way (a shared 'context' object? a fat 'result' object with multiple properties?)
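
A hedged sketch of what the inside of that retry loop could look like (the flag and names are hypothetical):

// Inside the retry loop, just before committing:
Task<VersionStamp> stampTask = null;
if (tr.HasVersionstampedMutations)          // hypothetical flag set by the atomic operations
    stampTask = tr.GetVersionstampAsync();  // must be requested BEFORE the commit
await tr.CommitAsync();
if (stampTask != null)
    context.VersionStamp = await stampTask; // resolved by now; exposed via onSuccess / the result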

If getting the resolved versionstamp from the transaction is cheap enough, I can do this automatically even if the caller doesn't need it. If it has a non-trivial cost, then I would need to add some setting or option to trigger it.

If the only cost is having to create a Future handle, then I could maybe optimize this case by delaying the allocation of all the interop machinery in the binding (only allocating the FutureHandle, not the tasks and callbacks).

This could make it so that I can use the existing low level C API, and attempt to guide users of the .NET binding into patterns that are less prone to allocations and potential deadlocks.

I would not want to artificially slow down the very fast path of code that just wants to add a message to some queue (using versionstamps) and doesn't care about the stamp itself. If the overhead is a couple %, who cares; but if it is more, then I really need to make this opt-in.

Ah, I see. I thought your request was for a change to the C API, so my comment was in regard to that. Certainly the bindings can provide other features on top of that to make various patterns easier. I don’t think that there’s much extra cost to requesting the versionstamp needlessly, as it appears that the call just returns a future that’s being set regardless.

For what it's worth, this is all quite fun with the new nodejs bindings too. I suspect that with the introduction of promises & async/await, the new node bindings are going to be similar to the C# bindings in many ways.

Deadlocks:

  const stamp = await db.doTransaction(async tn => {
    tn.setVersionstampedValue('x', Buffer.from([1,2,1,2,1,2,1,2,1,2]))
    return tn.getVersionStamp()
  }) // DEADLOCKS

… When an async function returns a promise, it automatically unwraps the inner promise before returning. Thus that code waits for the tn.getVersionStamp() promise internally, which deadlocks.

But you can trivially avoid that behaviour using awful hacks, like wrapping the stamp in an array:

  const stampArr = await db.doTransaction(async tn => {
    tn.setVersionstampedValue('x', Buffer.from([1,2,1,2,1,2,1,2,1,2]))
    return [tn.getVersionStamp()]
  })
  const stamp = await stampArr[0] // WORKS

I'm not sure what the best approach is here. JS might follow what you end up doing in the C# bindings.

I don’t think that switching to have a synchronous get_versionstamp() after commit would actually make any of these issues easier. If you do want that, I think you can have it by just blocking on the future after the transaction commits; I think that is in fact guaranteed to complete quickly enough to tolerate for even a single threaded event loop.

I think the composable way to build operations with versionstamps is roughly like this (apologies if my hand-written C# doesn’t compile)

public Task< (LogicalResult, Task<FullId>) > doThing( DatabaseOrTransaction dbtx, Guid id ) {
    return dbtx.ReadWriteAsync( async (tr) => {
        var read = tr.GetAsync( this.Location.Keys.Encode("A", id) );
        tr.SetVersionStampedKey( this.Location.Keys.Encode("B", tr.CreateStamp(1)), id );
        LogicalResult lr = computeLogicalResult( id, await read );
        var fullId = computeFullId( id, tr.GetVersionStampAsync() );
        return (lr, fullId);
    } );
}
private async Task<FullId> computeFullId( Guid id, Task<VersionStamp> stamp ) {
    return new FullId { part1 = id, part2 = await stamp };
}

Of course you can make computeFullId() an async lambda if you don’t want to name it.

You should be able to combine multiple things like this in a transaction, including doing them in parallel if they don’t otherwise conflict. You can easily call it on a database as well. The logic for exactly what the versionstamp means (and even that a versionstamp is used at all) is safely located inside doThing(). The caller is responsible for not waiting on the Task until you are definitely outside a transaction context, but that is just a fundamental requirement of using versionstamps (or watches, which are very similar). And for the same reason I think it is kind of fundamentally needed for the return value of an operation like doThing() that you want to be composable to have two parts in some sense - one that you can safely use inside another transaction that is composing doThing(), and one that can only be accessed outside a transaction.

@alloc What is the impact of #242 - Unify SET_VERSIONSTAMPED_KEY and SET_VERSIONSTAMPED_VALUE API and #148 for bindings?

Looks like, depending on the API version, we will have to specify the offset as 2 or 4 bytes.

That’s right. If you are a binding maintainer and have decided to implement versionstamps in tuples, the suggestion would be to have it choose whether to add two or four bytes based on the API version. Here’s the line where that’s done in the Java bindings: https://github.com/apple/foundationdb/blob/f3093642b3c1babe93aacbaee8f60b4008662d52/bindings/java/src/main/com/apple/foundationdb/tuple/TupleUtil.java#L583

The benefit of this change is that, once that's done, the same code can be used to encode data for SET_VERSIONSTAMPED_KEY mutations and SET_VERSIONSTAMPED_VALUE mutations, rather than having one mutation take two bytes and the other four. (The alternative would be to have SET_VERSIONSTAMPED_VALUE mutations carry a weird limitation where you could place a versionstamp only in the first 65 kB of the value.) I don't think it should be too painful, but maybe I'm wrong.
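
For binding maintainers, a hedged sketch of that choice, mirroring the linked Java code (the exact API-version cutoff used here is an assumption):

// Hypothetical: write the stamp offset as 2 bytes (little-endian) under older
// API versions, and as 4 bytes under newer ones.
static byte[] AppendOffsetSuffix(byte[] packed, int offset, int apiVersion)
{
    int width = apiVersion < 520 ? 2 : 4; // the 520 cutoff is an assumption
    var buffer = new byte[packed.Length + width];
    Array.Copy(packed, buffer, packed.Length);
    for (int i = 0; i < width; i++)
        buffer[packed.Length + i] = (byte)((offset >> (8 * i)) & 0xFF);
    return buffer;
}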