gRPC binding / gateway

Hi,

Have there been any thoughts/experiments about having a more standardized RPC (such as gRPC) over the current specialized c-bindings? I can see this being a layer/gateway, but also something that could potentially fit into the core(?).

I’m a little curious to develop a POC gateway, but before walking down such a path I wanted to check if there is any prior work or knowledge to either build on or disqualify such an initiative (I know close to zero about the internal logic of the c-bindings).

Background: As a go-developer I’m not a big fan of depending on cgo :slight_smile:


One of the major changes in FoundationDB 6.2 is that we now use FlatBuffers as a serialization protocol. This would make an effort to reimplement the c client a bit easier…

That being said, writing your own client won’t be trivial. Our client contains a lot of logic and we rely on the fact that we can control the client (for example for load balancing). Some things in the c client are highly non-trivial (the ReadYourWritesTransaction implementation in the client is one of the most complex parts within fdb).

Because of this I wouldn’t recommend writing your own Go client. Getting it correct will be very hard, and maintaining it will be a lot of work.

That being said, I think we should eventually have some form of proxy that exposes a simple interface against which one can program. This can already be done as a layer (the document layer exposes a different network protocol). An alternative would be to have this maintained by fdb itself which might make things easier (however, it is not clear how service discovery would work in that case).

There is also some related discussion here, if you’re interested.

We make arbitrary changes to the wire protocol without documenting what they are and don’t maintain backwards compatibility for them. I’m not sure using FlatBuffers actually represents a meaningful improvement here, as we’re not even offering FlatBuffers IDL files that other folks could import.

Having a piece of FDB act as a known endpoint that speaks a known protocol has been discussed as a future direction of Add support for "Read Proxy" role · Issue #1938 · apple/foundationdb · GitHub , so the idea is vaguely on our radar. If you’re interested in doing work towards that goal, then please go raise your hand and/or begin writing a design doc on the bug.

It would be fairly easy to autogenerate the IDLs… We (Snowflake) will also try to keep some amount of backwards compatibility (so that newer clients will work with older servers up to a certain version) and we’ll propose patches if we find issues with that.

But all that said: I think this is all kind of irrelevant. The client logic is way too complex to make a reimplementation in every supported language feasible.

One could also probably front FDB with a gRPC service that exposed something similar to the functions in our C API (probably using gRPC streams to correspond to transaction “sessions”, with the first operation in the stream also beginning the transaction, and a stream closure aborting any uncommitted work). Then this could be fronted with a standard load balancer and connected to in all of the “standard” ways with URLs/VIPs. (One could even add authentication at this layer and/or restrict operations to certain key ranges based on the authenticated user.)

In my mind, the gRPC servers use the existing C API (either through libfdb_c if implemented as a separate service, say, or through the NativeAPI directly if they are just other processes in the cluster) to handle all communication with the existing set of processes, but then they use gRPC to talk to clients. (Between FlatBuffers and Protobuf as the interchange format for gRPC, I have no opinion.)
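
To make that a bit more concrete, here’s a rough Go sketch of what a per-stream handler on such a gateway might look like, driving the existing Go bindings (which wrap libfdb_c) underneath. All of the Op/Result/TxnStream types are made up purely for illustration; a real service would generate them from whatever IDL we settled on.

```go
// Hypothetical gateway-side handler: one open gRPC stream maps to one FDB
// transaction. Op, Result and TxnStream are placeholders for generated gRPC
// types; the fdb calls below are from the existing Go bindings.
package gateway

import (
	"io"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

type Op struct {
	GetKey []byte // non-nil: read this key
	SetKey []byte // non-nil: write SetVal under this key
	SetVal []byte
	Commit bool // true: commit and end the stream
}

type Result struct {
	Value     []byte
	Committed bool
}

type TxnStream interface {
	Recv() (*Op, error)
	Send(*Result) error
}

// runTransaction keeps one FDB transaction open for the lifetime of the
// stream; a stream that ends without a Commit op abandons its writes.
func runTransaction(db fdb.Database, stream TxnStream) error {
	tr, err := db.CreateTransaction()
	if err != nil {
		return err
	}
	for {
		op, err := stream.Recv()
		if err == io.EOF {
			tr.Cancel() // client closed the stream: drop uncommitted work
			return nil
		}
		if err != nil {
			tr.Cancel()
			return err
		}
		switch {
		case op.GetKey != nil:
			v, err := tr.Get(fdb.Key(op.GetKey)).Get()
			if err != nil {
				return err
			}
			if err := stream.Send(&Result{Value: v}); err != nil {
				return err
			}
		case op.SetKey != nil:
			tr.Set(fdb.Key(op.SetKey), op.SetVal)
		case op.Commit:
			if err := tr.Commit().Get(); err != nil {
				return err
			}
			return stream.Send(&Result{Committed: true})
		}
	}
}
```

Real code would also have to map FDB errors onto retry decisions (either on the gateway or by surfacing a retryable flag to the client), which is glossed over here.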

If one is willing to do some amount of work on the client side, one could probably take our existing bindings and make an API-compatible thing that talked to this gRPC service instead of directly to the (current) FDB processes. This is getting somewhat in the weeds, but one problem I’ve often come across when thinking about this is what to do about the “set” operation, which the bindings assume is non-blocking. I think for that, one could (in the client) create a “client request queue”, and operations like set would simply enqueue a request, and operations like get, which demand RYW, would wait for all outstanding set operations to complete.
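
A minimal sketch of that queue idea, assuming a hypothetical session stream to the gateway (none of these types exist today). For simplicity it just flushes queued writes before a read; a real client would more likely pipeline the sends and wait on acknowledgements.

```go
// Hypothetical binding-side wrapper: Set stays non-blocking by queueing the
// mutation locally; Get flushes queued mutations to the gateway before
// reading, so the gateway's read-your-writes view includes them.
package gatewayclient

import "sync"

type request struct {
	key, value []byte
	isGet      bool
}

// sessionStream stands in for the client side of the gRPC transaction stream.
type sessionStream interface {
	Send(*request) error
	Recv() ([]byte, error)
}

type Transaction struct {
	mu      sync.Mutex
	stream  sessionStream
	pending []*request // queued, not-yet-sent writes
}

// Set matches the non-blocking contract of the existing bindings.
func (t *Transaction) Set(key, value []byte) {
	t.mu.Lock()
	defer t.mu.Unlock()
	t.pending = append(t.pending, &request{key: key, value: value})
}

// Get demands RYW, so it drains the queue before issuing the read.
func (t *Transaction) Get(key []byte) ([]byte, error) {
	t.mu.Lock()
	defer t.mu.Unlock()
	for _, r := range t.pending {
		if err := t.stream.Send(r); err != nil {
			return nil, err
		}
	}
	t.pending = t.pending[:0]
	if err := t.stream.Send(&request{key: key, isGet: true}); err != nil {
		return nil, err
	}
	return t.stream.Recv()
}
```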

But before one embarked on this project, I think it would be important to have a clear idea why. (Your scientists were so preoccupied with whether or not they could, they never stopped to think about whether or not they should!) The most tangible benefit (IMO) would be that now that clients are using a protocol with a backwards compatibility story, client and server versions are no longer intertwined (ideally with tests to prove it). The other benefit would be that you no longer need to distribute a random C library, but can instead rely on the gRPC client libraries for most languages (which actually wrap a C library for some languages and reimplement everything for other languages), and connecting to a cluster looks a lot more like connecting to other services, which might be advantageous. The cost is an extra network hop, and also the complexity of getting client/server interactions right and monitoring these extra processes or the extra service. But this approach, unlike the ones outlined above, doesn’t require changing FDB’s server-to-server protocol (or requiring it to be backwards compatible), and it doesn’t require a rewrite of RYW or the other complex things the client currently does.

So, I don’t know. I’m somewhat ambivalent about this idea, as there are probably complexities I haven’t thought of. But I think something like it could be made to work.

FWIW, we’ve written a “storage server” that implements a gRPC K/V service. As a quick summary of what we have:

  • The main endpoint that we offer on the gRPC server is a bi-directional stream called RunTransaction - keeping the stream open corresponds to a single open FDB transaction.
  • On the RunTransaction bi-di stream clients can MultiGet/RangeGet/MultiPut/MultiDelete key/value pairs and the server will return responses along the stream (a rough sketch of what those messages might look like follows this list).
  • All of the fancy FDB client logic (e.g. RYW, retry detection, etc.) happens on the gRPC server.
  • The gRPC service is a very thin wrapper around FDB APIs and so has the same limitations: 5 second transaction duration, simple K/V API, etc…
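
Since we haven’t published the proto yet, here is a rough Go rendering of the kind of message shapes the RunTransaction stream carries, simplified and with approximate names rather than our actual schema:

```go
// Approximate message shapes for the RunTransaction bi-di stream described
// above; field names are illustrative, not the real proto.
package kvproto

// TransactionRequest is one client->server message on the open stream.
type TransactionRequest struct {
	MultiGet    [][]byte          // keys to read within the transaction
	RangeGet    *RangeQuery       // optional range read
	MultiPut    map[string][]byte // key -> value writes
	MultiDelete [][]byte          // keys to clear
	Commit      bool              // set on the final message
}

type RangeQuery struct {
	Begin, End []byte
	Limit      int
}

// TransactionResponse is one server->client message; the server applies the
// FDB client logic (RYW, retries) before answering.
type TransactionResponse struct {
	Values    map[string][]byte // results for MultiGet/RangeGet
	Committed bool
	Error     string
}
```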

The two main reasons for wrapping FDB in the storage server are:

  1. We wanted to keep all of the “stateful” parts of our system contained within the storage servers. We can then schedule the storage server as a Kubernetes StatefulSet, and make all client microservices simple ReplicaSets. All of the messiness with syncing cluster files and connecting to IP addresses (instead of hostnames) goes away for clients.
  2. We write most of our microservices in Go, and so eliminating the cgo dependency meant that cross-compiling and developing across OSes became much simpler (we target the cloud and also smaller ARM64 devices).

The downside is that it’s not as performant, but for us the simplicity in operation is well-worth any performance hits.

I would love to open source our work here, but it’s probably a bit early to do so… Happy to talk about our experiences here though!

Thanks for the great feedback. Key takeaways: 1) there seem to be a few initiatives towards this goal, but none public or completed; 2) there seem to be no known blockers (except maybe complexity for a full replacement).

I’m 100% on board with the idea of wrapping the c-lib (not replacing it) and maintaining the state on a gateway.

Advantages I see in having a (g)RPC gateway:

  1. No C dependency (easier build systems / easier to get started)
  2. Lightweight client Libs
  3. Forward & Backward API stability
  4. Easier mocking / testing.
  5. Front the cluster with a public proxy that supports authentication. gRPC also makes switching between test/stage/prod environments as easy as changing a URL + credentials.
  6. I’m not a fan of the cluster file for service discovery; this complexity should be hidden from consumers (imo). The cluster file also assumes that all clients reside in a trusted environment, which I think is a sane design decision and delimitation, but it doesn’t fit all deployment scenarios, for which a proxy/gateway can be helpful.
  7. If the API aligns somewhat with existing KV store protocols, it might increase adoption.

To me this is a layer very close to the core which can easily be extended in itself.

@jared2501 This is exactly what I had in mind :). Care to share some insights on the “not as performant” part? I would expect the extra round trips to add (some) delay but not limit throughput. If you don’t want to share the code, would you maybe share the proto file(s) as a starting point?

Edit: Clarified #6

Just for a little context about these design decisions, one of the main principles laid out for FoundationDB was that the surface of the core key-value store should be reasonably small, with many of these types of features delegated out to layers (such as an RPC KV layer). There’s a little written about this here.

Yes; sorry for being vague, I totally agree with that separation and the overall design decisions/priorities of fdb; I’ll rephrase to clarify that it complicates usage in some cases :slight_smile:

Yeah, the main issue is around extra round trips; I would agree it doesn’t reduce throughput. One thing to note is that the APIs are pretty tricky to design - the FDB bindings/APIs do a very good job at streaming data so that nodes don’t OOM, talking to the minimum number of nodes, etc. Designing this into the gRPC APIs is tricky and we make a few trade-offs to simplify (e.g. we don’t stream back range requests).
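
If we did want to stream range results back, a sketch against the Go bindings might look like the following; the RangeStream interface is just a stand-in for a server-streaming gRPC method, not anything we’ve actually built.

```go
// Sketch of streaming a range read back to a gRPC client using the Go
// bindings' range iterator, so the gateway never buffers the whole result.
package gateway

import "github.com/apple/foundationdb/bindings/go/src/fdb"

// KV is one streamed key/value pair; RangeStream stands in for the send side
// of a hypothetical server-streaming gRPC method.
type KV struct {
	Key, Value []byte
}

type RangeStream interface {
	Send(*KV) error
}

// streamRange forwards each key/value pair as soon as the client library
// yields it, instead of collecting the full range in memory first.
func streamRange(tr fdb.Transaction, begin, end fdb.Key, out RangeStream) error {
	it := tr.GetRange(
		fdb.KeyRange{Begin: begin, End: end},
		fdb.RangeOptions{Mode: fdb.StreamingModeIterator},
	).Iterator()
	for it.Advance() {
		kv, err := it.Get()
		if err != nil {
			return err
		}
		if err := out.Send(&KV{Key: kv.Key, Value: kv.Value}); err != nil {
			return err
		}
	}
	return nil
}
```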