FoundationDB .NET Standard

There is an old .NET Framework client, but is there a .NET Standard version?

Is anyone using it with .NET?

Just wondering what my best way forward is

It appears that @KrzysFR is putting work into modernizing the .NET bindings that he wrote. I’d probably watch that space: https://github.com/Doxense/foundationdb-dotnet-client

yes, I see it’s coming alive again https://github.com/Doxense/foundationdb-dotnet-client/issues/67

.NET Standard 2.0 support is definitely one of my goals!

I’m currently working on upgrading the old code to the newest version of the low-level framework (Slice, Tuples, …), which was extracted for use in other projects and is already .NET Standard 2.0 compliant.

I’m also taking this opportunity to revamp the Subspace and Key Encoder APIs; after using them extensively for 3-4 years, they have changed quite a bit.

One question I have, which will depend on who starts using this project: what kind of dependencies should I allow myself to take?

Currently, the binding does not have any dependencies and targets .NET 4.6.1. This is trivial to port to .NET Standard 2.0 because the code is pretty straightforward, except for the Interop stuff, which I guess can be made to work on Linux/Mac/Windows easily. Also, note that you can reference .NET 4.6.x assemblies from .NET Standard 2.0 projects with only a build warning. I think this should work, though I haven’t tested it myself yet.
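To make the Interop point concrete, here is a minimal sketch of a cross-platform P/Invoke declaration, assuming only the documented fdb_get_error and fdb_select_api_version_impl entry points of the C API (this is an illustration, not the binding’s actual interop layer):

```csharp
// Minimal sketch: .NET Core resolves the "fdb_c" library name to fdb_c.dll on
// Windows, libfdb_c.so on Linux and libfdb_c.dylib on macOS, so a single
// DllImport declaration can cover all three platforms.
using System;
using System.Runtime.InteropServices;

internal static class FdbNativeSketch
{
    private const string FDB_C_DLL = "fdb_c";

    // const char* fdb_get_error(fdb_error_t code) -> static error message string
    [DllImport(FDB_C_DLL, CallingConvention = CallingConvention.Cdecl)]
    public static extern IntPtr fdb_get_error(int code);

    // fdb_error_t fdb_select_api_version_impl(int runtime_version, int header_version)
    [DllImport(FDB_C_DLL, CallingConvention = CallingConvention.Cdecl)]
    public static extern int fdb_select_api_version_impl(int runtimeVersion, int headerVersion);
}
```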

When porting the newer code, I had to temporarily disable support for ValueTuple<..> and Span<T>/ReadOnlySpan<T> because they rely on the external NuGet packages System.ValueTuple and System.Memory. I also had to remove some uses of ValueTask<T>, which relies on the System.Threading.Tasks.Extensions package.

These packages make my life a lot easier, but if I take a dependency on them, I MAY introduce versioning issues with applications that also use different versions of these packages.

The ValueTuple and ValueTask<T> packages are pretty safe because their APIs look stable. I haven’t had any issues upgrading either of them.

The System.Memory NuGet package, though, is pretty much in flux right now. It has changed quite a bit in the weeks preceding the release of the .NET Core 2.1 preview. I’ve been following the .NET Design Review meetings (https://www.youtube.com/channel/UCiaZbznpWV1o-KLxj8zqR6A), and hopefully things will calm down once .NET Core 2.1 is out.

Anyway, I’m wondering what everyone thinks about this:

  • 1. Jump straight to .NET Core 2.1 with spans, tuples, and everything new, at the cost of adding dependencies on NuGet packages, some of which are still moving at a quick pace.
  • 2. Support both the new and the old stuff with tons of #if / #else in the code and ship multiple versions (see the sketch after this list).
  • 3. Target the lowest possible version of everything to maximize compatibility.
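To give an idea of what option 2 would look like in practice, here is a minimal sketch (not actual binding code; USE_SPAN_API is a hypothetical compilation symbol that would be defined per target framework):

```csharp
// Hypothetical example of the kind of #if forks option 2 would require: the same
// helper compiled against either the Span<T> API or the classic array API.
using System;

public static class SliceHelpersSketch
{
#if USE_SPAN_API
    // Newer path: no copies, relies on System.Memory (or .NET Core 2.1+).
    public static int CompareBytes(ReadOnlySpan<byte> left, ReadOnlySpan<byte> right)
    {
        return left.SequenceCompareTo(right);
    }
#else
    // Older path: works on .NET 4.6.1 / .NET Standard 2.0 without extra packages.
    public static int CompareBytes(byte[] left, byte[] right)
    {
        int n = Math.Min(left.Length, right.Length);
        for (int i = 0; i < n; i++)
        {
            int cmp = left[i].CompareTo(right[i]);
            if (cmp != 0) return cmp;
        }
        return left.Length.CompareTo(right.Length);
    }
#endif
}
```

Every fork like this doubles the code paths that need to be tested on each target framework.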

I’m in favor of 1. because I really want the perf improvements of spans. I’m not a fan of 2. because of the complexity involved in having multiple code paths to test and maintain. I understand people wanting 3. if they are stuck on older .NET Framework versions, OR if they are already entangled in a NuGet versioning mess with other libraries (I’ve been there).

I have migrated the project to .NET Standard 2.0 class libraries, see https://github.com/Doxense/foundationdb-dotnet-client/issues/68

It was pretty uneventful and straightforward to do…

I’ve only been able to test on Windows (.NET Framework 4.7.x), so I’m not sure if this works with .NET Core 2.x or on Linux/Mac.

Also, please bear in mind that the API is still in flux, so don’t go building 100K LOC of code and putting it in production yet :slight_smile:


Wonder if the SQL parser will also be released, and what it would take to port this project to .NET Core: https://github.com/jaytaylor/sql-layer/

Since that’s probably a multi-man-year effort, I will let others tackle THAT problem :slight_smile:

Looking at other comments, it does not appear that the Java implementation is actively maintained either.

Sounds like it should be part of the server… as in, why have to install another language just for the SQL layer? If the server were PG protocol compliant out of the box, then any .NET Core ORM + npgsql driver would work… I think CockroachDB does this… so is it safe to assume your library will stay lower level?

I think the bindings are expected to be the base library (for each language) that enables other Layers or applications to be built on top of it.

If you need something more complex like a SQL Layer (or Document Layer), either you will have to re-implement it in your favorite language, OR people will need to agree on some REST API or wire protocol so that only one implementation needs to be written (in whatever language) and can benefit everyone. In that sense, it would be as if PostgreSQL or MongoDB or Redis used FDB as their internal storage engine, but you as the user wouldn’t know, and would keep using the usual client library and existing ecosystem for that db.

Specific to .NET, it would be as if you wanted to emulate a RavenDB server (using FDB as the storage engine), but still be able to use their own Client (or at least wire protocol) to connect to the cluster (not that I’m saying it’s a good idea! :wink: )

I think there are some discussions on this subject as well: Coprocessors or modules, SQL layer in FoundationDB, etc…


I am biased toward #1. It makes down-level .NET support harder, but the performance and memory story is so much better. And I believe that 2.1 is going to be such a significant release that adoption will be pretty quick.


This is going to be controversial, but to me this entangled-layers problem is fundamental to the way FDB is designed: you often don’t actually want updating your index (or whatever) to be the responsibility of the processes that write data. Consider a computed view that renders HTML. You might have many systems making changes to your database, written in multiple languages, but the actual HTML rendering code should live in one place, and be responsible for itself. You want to support both the data changing (and thus individual HTML documents being re-rendered) and the HTML view function changing (causing a re-render of everything).

Other architectural models which I think are better aligned:

  • Add an embedded language in the FDB server itself. When FDB sees a transaction, it runs code that augments the transaction before submitting it to the database. As an example, any transaction which changes /blogposts/my-dancing-cat/tags is augmented with changes to /blogposts/bytags/. So, you attach a script which runs inside FoundationDB itself and causes any transaction that writes to one subspace to include additional writes to another subspace. This is similar to what we already have, except you take responsibility for this work out of the client and put it in another system through which writes are sent. I.e., client write -> indexer service -> foundationdb -> client read
  • Instead of that, make the indexer consume the event stream from FoundationDB. So you structure it as client write -> foundationdb -> indexer -> index store -> client read. You don’t need indexes to update synchronously with FDB transactions. As long as the index value stores the primary FoundationDB version from which it was derived / last updated, you can use conflict ranges and retry semantics to guarantee that your reads across both stores are atomic (see the sketch after this list). For this we need the FoundationDB event log to become part of its published API. (It already exists, it’s just considered internal and subject to change without notice.)
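A minimal sketch of the version-check part of that idea, using purely hypothetical types (IIndexStore and IFdbSource are illustrations, not real APIs): the index store keeps, next to each derived value, the FDB version it was derived from, and a reader treats an entry as stale when the source has moved past that version.

```csharp
// Illustrative only: how a reader could detect a stale index entry by comparing
// the version the entry was derived from against the source's current version.
using System;

public interface IIndexStore
{
    // Returns the derived value and the FDB version it was derived from.
    (string Html, long DerivedFromVersion) GetRenderedPage(string postId);
}

public interface IFdbSource
{
    // Returns the version at which the source document was last modified.
    long GetLastModifiedVersion(string postId);
}

public static class ConsistentReadSketch
{
    public static string ReadPage(IFdbSource source, IIndexStore index, string postId)
    {
        var (html, derivedFrom) = index.GetRenderedPage(postId);
        long lastModified = source.GetLastModifiedVersion(postId);

        // If the source changed after the index entry was derived, the entry is
        // stale: the caller can retry, wait for the indexer to catch up, or recompute.
        if (lastModified > derivedFrom)
        {
            throw new InvalidOperationException("Index entry is stale; retry or recompute.");
        }
        return html;
    }
}
```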

Version 5.1.0-alpha1 of the .NET Binding is now available on NuGet!

It targets .NET Standard 2.0 and has some initial support for .NET Core on Mac and Linux, and the low-level API (Slice, Tuples, …) has also been upgraded after 3 years of internal use in other projects.

See the release notes

The API is still marked as unstable, and I would really like to have some feedback on it! (especially the VersionStamps, which are new and somewhat controversial…)

One quick note on the versioning scheme: why jump from 0.9.9 to 5.1.0? I thought it would be easier to keep the version in sync with the database, so that the question “what version of the binding do I need to be able to use new feature X of version Y?” is easy to answer. Also, during the 3-year interlude, people may have built and used private packages with version 1.x, so jumping straight to 5.x should prevent any collision.


If there are people who were using the previous version (0.9.x) and are seeing a lot of source-breaking changes, please get in touch and I will try to help you with the migration!
