NodeJS bindings: Announcing 1.0.0 with directory layer support!

Hey! Author of the NodeJS bindings here. Moments ago I pushed version 1.0.0 of the nodejs bindings to npm. New things:

  • It now supports fdb api 620 (:tada:)
  • We now have first class support for subspaces like the other APIs (:man_dancing:)
  • And most importantly (drumroll) The directory layer has been added (!!! :confetti_ball:)

I’ve also fixed a ton of bugs along the way, and dropped legacy support for ancient versions of nodejs.

Now you can do this flex:

const fdb = require('foundationdb')
fdb.setAPIVersion(620)
const db =

;(async () => {
  const books = await, 'books')

  const space = books.getSubspace()
    .withValueEncoding(fdb.encoders.json) // tuple works too!

  await['some', 'tuple', 'key', 123], {
    title: 'Reinventing Organizations',
    author: 'Laloux',
  })

  const value = await['some', 'tuple', 'key', 123])

  console.log(value) // { title: 'Reinventing Organizations', author: 'Laloux' }
})()

It’s also totally interoperable with the directory layer in other languages, so you can mix, match and migrate to your heart’s content. And the bindingtester runs comfortably for hours without a hiccup.

Have a play - run npm install foundationdb and you should get the latest.

Full changelog here - I’ve also released a slew of 0.10.x point releases over the last couple of weeks to fix various issues in the lead-up to 1.0. (Some of those bugs were kind of a big deal.)


Hi Joseph

Can we say that this is the “official” library for NodeJS?
I’m referring to this repo
and this npm package

Thanks in advance,
Sergio Olvera

Sure, if you like.

Apple hasn’t officially blessed the binding code I’ve written, but I don’t think there are any other real contenders for the ‘official nodejs bindings’ crown amongst the community. There are a couple of other fdb libraries on npm, but as far as I know they all either wrap my library or they’re horribly out of date.

For raw, direct foundationdb access from nodejs, my foundationdb library in npm is the one you should use.

Right, at this stage there isn’t a formal process for making projects in the FoundationDB ecosystem “official” – particularly language-specific bindings that are community-maintained. That’s something I’d like to see – I think it could be a positive boost for the community by clarifying where to get started.

If there’s openness from Apple to go down this path, I’m happy to volunteer some cycles to put together a proposal for how this could work. Perhaps @josephg this is something we could discuss further, with input from Apple.

Absolutely. I’d be delighted to pursue that if there’s interest from folks at Apple.

I really like this syntax for simplifying the API of encoding keys!

In the .NET binding, it would look more like:

tr.Set(space.EncodeKey("some", "tuple", "key", 123), valueEncoder.Encode(new { Title = "..", Author = "..." }));

This means I usually have to carry the tr, the subspace, and an optional valueEncoder around in lambdas.

Something like tr.At(keyEncoder&lt;X, Y&gt;, valueEncoder&lt;Z&gt;) could return a thin wrapper around the transaction instance that presents a typed API for all the common methods (Set((X, Y), Z), Clear((X, Y)), etc…)
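A rough TypeScript sketch of what that wrapper could look like. All the names here (TypedTr, Encoder, RawTr, the in-memory store) are invented for illustration - they aren’t part of any existing binding:

```typescript
// Hypothetical sketch of the tr.At(...) idea: pair a raw transaction with a
// key encoder and a value encoder once, so they don't have to be threaded
// through every lambda by hand.
interface Encoder<In, Out> {
  pack(value: In): Uint8Array
  unpack(bytes: Uint8Array): Out
}

interface RawTr {
  set(key: Uint8Array, value: Uint8Array): void
  get(key: Uint8Array): Promise<Uint8Array | undefined>
}

class TypedTr<K, V> {
  constructor(
    private tr: RawTr,
    private keyEnc: Encoder<K, K>,
    private valEnc: Encoder<V, V>,
  ) {}

  set(key: K, value: V): void {
    this.tr.set(this.keyEnc.pack(key), this.valEnc.pack(value))
  }

  async get(key: K): Promise<V | undefined> {
    const bytes = await this.tr.get(this.keyEnc.pack(key))
    return bytes === undefined ? undefined : this.valEnc.unpack(bytes)
  }
}

// In-memory stand-in for a real transaction, just to exercise the sketch:
const store = new Map<string, Uint8Array>()
const mem: RawTr = {
  set: (k, v) => void store.set(Buffer.from(k).toString('hex'), v),
  get: async k => store.get(Buffer.from(k).toString('hex')),
}

const json: Encoder<any, any> = {
  pack: v => Buffer.from(JSON.stringify(v)),
  unpack: b => JSON.parse(Buffer.from(b).toString()),
}

const tr = new TypedTr(mem, json, json)
tr.set(['some', 'key'], { title: 'Reinventing Organizations' })
```

The nice part is that once the wrapper is constructed, every call site gets compile-time checking of both key and value shapes for free.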

I may try a few things this week-end! :slight_smile:

A few questions:

  • Usually I need to encode different things under the same subspace, and here your ‘space’ object wraps both a fixed key and value shape. Do you have another pattern for spaces that only specify the shape of a key? Or do you have to juggle multiple variants of the same “space” with different key/value encodings?
  • How do you handle encoding of partial keys? For example, with composite keys like (X, Y, Z), I frequently need to create ranges for (X, Y, *) or (X, *, *), which requires a lot more methods to cover all the variants: how many arguments are omitted, whether the input is the items themselves, a tuple of the items, or a partial list of items, etc…
  • Same thing for partial decoding? Usually when reading back a composite key, I’m only interested in the last one or two items (that contain a record_id with an optional chunk_id, etc…) without wanting to do the work of decoding the head of the key.

Thanks! I’m really happy with how it turned out.

The one downside is all the generic type parameters (KeyIn / KeyOut / ValIn / ValOut) I’m passing around everywhere (since it’s TypeScript). The Subspace, Directory, Database and Transaction classes have all ended up with those 4 type parameters because they’re subspace-like. But hopefully most users won’t need to look at that too much. The result is that user code is entirely type-safe with respect to the encoding - like, you get compile errors if you specify the tuple key encoder and then try to pass a javascript object in as your key.

(And if you’re curious why I have a type for both KeyIn and KeyOut (and both ValIn and ValOut), it’s because you can make your encoders more forgiving of what they accept than what they return - by default you can pass keys and values in as Buffer | string, but when keys and values are returned to the user they’re always Buffers. That’s easier for a user to work with, but it makes the code uglier… so :woman_shrugging:).
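To illustrate the asymmetry, here’s a minimal, made-up transformer type (not the library’s actual definitions) where the input type is deliberately wider than the output type:

```typescript
// Minimal illustration of the KeyIn / KeyOut asymmetry: the encoder is
// forgiving about what it accepts (Buffer | string) but always hands
// Buffers back, so the type going in differs from the type coming out.
interface Transformer<In, Out> {
  pack(value: In): Buffer
  unpack(bytes: Buffer): Out
}

// Default-style key encoding: accepts Buffer | string, returns Buffer.
const bytesEncoder: Transformer<Buffer | string, Buffer> = {
  pack: v => (typeof v === 'string' ? Buffer.from(v) : v),
  unpack: b => b,
}

const packed = bytesEncoder.pack('hello') // strings are fine going in...
const out = bytesEncoder.unpack(packed)   // ...but you always get a Buffer back
```

With a single type parameter you’d be forced to either reject strings on input or promise to return them on output; splitting In and Out lets the encoder do both.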


I mean, you can have a subspace which only specifies the key encoding - myDir.withKeyEncoding(fdb.tuple).at('books') will do it. If you don’t specify the value encoding, it’ll just take in values as buffers or strings, and you’ll need to encode & decode them yourself. Or you can make multiple variants of the subspace with each of the different value encoding methods you want, and then inside a transaction block you can rescope your transaction object ( /, and so on) to interact with different parts / types of data.

I still don’t have a really clear sense of the best practices for using the API I’ve written. There are lots of ways to do what you want, but I’m still not sure what to recommend to people when you do stuff like that.

When you say partial keys, remember the tuple encoder has a neat property that pack(['A']) + pack(['B']) === pack(['A', 'B']). So subspaces just cache the encoded bytes for the prefix - (X, Y) in your example. Then the user can specify a key of (Z) and we tuple pack that and then concatenate it to the prefix we saved earlier. It wouldn’t help if you wanted (X, *, Z) but for (X, *, *) you just make a subspace for (X) and for (X, Y, *) you just make a subspace for (X, Y).

And then the subspace strips the prefix back off on the way out. If you use a subspace with the prefix (X, Y) to fetch the items from (a) to (b), the keys that getRange returns will just be (a, …) not (X, Y, a, …).

Does that make sense? I’m not sure if I’ve actually answered your question.


const scopedDb = dbRoot.withKeyEncoding(fdb.tuple).at(['X', 'Y'])
// Which is the same as if you did (...).at('X').at('Y')
await scopedDb.set(['a', 1], val1) // sets key ['X', 'Y', 'a', 1]
await scopedDb.getRangeAll('a', 'b') // returns [[['a', 1], val1]]

The code in question is here if you’re curious, called from this mess in the Subspace class

Looking forward to upgrading to your directory layer in
