How to increase the connection count from client to server in the Go SDK or the C SDK

  1. I found that the database handle is created from a global map in the Go SDK.
    When that single handle is used across many goroutines, too few connections can't process so many transactions.
    How can I increase the connection count? (A usage sketch follows this list.)

  2. I also found that the fdbclient is designed around a single network thread. Can it perform well for a multi-threaded C++ HTTP server (24 CPU cores)?
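For reference, this is roughly the pattern in question: a minimal sketch, assuming the Go binding from github.com/apple/foundationdb/bindings/go, in which one Database handle (cached in the binding's global map) is shared by many goroutines. The API version and key names are illustrative, not anything the binding requires.

```go
package main

import (
	"fmt"
	"sync"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

func main() {
	// The API version and key layout here are illustrative.
	fdb.MustAPIVersion(620)

	// One Database handle per cluster file; the Go binding caches it in a
	// global map, so repeated opens return the same handle.
	db := fdb.MustOpenDefault()

	var wg sync.WaitGroup
	for i := 0; i < 100; i++ {
		wg.Add(1)
		go func(i int) {
			defer wg.Done()
			// Transact may be called from many goroutines; all of the
			// resulting requests are funneled through the single client
			// network thread.
			_, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
				tr.Set(fdb.Key(fmt.Sprintf("demo/%d", i)), []byte("value"))
				return nil, nil
			})
			if err != nil {
				fmt.Println("transaction failed:", err)
			}
		}(i)
	}
	wg.Wait()
}
```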

I think the answer to both your questions is “you might need to run more processes”. The FDB client library does indeed have a single background thread that handles the network communication. If your application manages to saturate this thread, then you’re left at a point of needing to run more processes to continue scaling up.

If I understood your question in (1) correctly, it sounds like your golang app is already saturating the network thread, and you might need to run more processes to get better throughput. Depending on how much FDB-related work your 24-core HTTP web server is doing, you may or may not be in the same situation as well.


If you haven’t already benchmarked it, don’t assume you need lots of connections based on experience with other databases. Most databases can only accept one concurrent request on a connection, or at best can pipeline requests but can only reply to them in order. FoundationDB uses an out of order async RPC protocol and can send lots of concurrent requests over each connection. And a client will open connections to potentially all of the servers in a cluster, not just to one of them.
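To make that concrete, here is a minimal sketch (reusing the `fdb` and `fmt` imports and the `db` handle from the sketch above; the key names are made up) of issuing many reads inside one transaction and only blocking once they are all in flight, so they are pipelined over the client's existing connections rather than answered one at a time.

```go
// fetchMany issues `count` reads inside one transaction and only blocks after
// every future has been created, so the requests are pipelined rather than
// waiting for each reply in turn. Key names are made up for illustration.
func fetchMany(db fdb.Database, count int) error {
	_, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
		futures := make([]fdb.FutureByteSlice, 0, count)
		for i := 0; i < count; i++ {
			futures = append(futures, tr.Get(fdb.Key(fmt.Sprintf("demo/%d", i))))
		}
		// Block only now; every request is already in flight.
		for _, f := range futures {
			if _, err := f.Get(); err != nil {
				return nil, err
			}
		}
		return nil, nil
	})
	return err
}
```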

I had already benchmarked the Go apps. One process can perform 1w/s in transactions and can't improve by adding goroutines, but two processes can easily perform 2w/s. Maybe performance could be improved by adding network threads.
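For anyone who wants to reproduce this kind of measurement, here is a rough benchmark sketch. It reuses the `fdb`/`fmt` imports and `db` handle from the earlier sketch and additionally needs `sync`, `sync/atomic`, and `time`; the key layout and measurement window are arbitrary choices, not part of any FDB API.

```go
// measureTPS runs small read-write transactions from `workers` goroutines for
// the given duration and returns committed transactions per second.
func measureTPS(db fdb.Database, workers int, d time.Duration) float64 {
	var committed int64
	deadline := time.Now().Add(d)
	var wg sync.WaitGroup
	for w := 0; w < workers; w++ {
		wg.Add(1)
		go func(w int) {
			defer wg.Done()
			for time.Now().Before(deadline) {
				_, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
					tr.Set(fdb.Key(fmt.Sprintf("bench/%d", w)), []byte("x"))
					return nil, nil
				})
				if err == nil {
					atomic.AddInt64(&committed, 1)
				}
			}
		}(w)
	}
	wg.Wait()
	return float64(committed) / d.Seconds()
}
```

Running it with increasing worker counts should show throughput flattening out once the single network thread is saturated, which is the point at which adding a second process helps.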

The multi-process model demands more expertise from our team than the multi-thread model; managing threads is easy in some situations. For example, when I want to dispatch keys to different threads by hash, I only need to bind a hash range to each thread, but with a multi-process model I would also need to know how to use pipes/shared memory and manage child processes to do the same thing.
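If we did go multi-process, the dispatch itself could look something like the sketch below: a parent spawns one worker process per hash shard and passes the shard index in an environment variable, and each worker opens its own FDB client (and therefore gets its own network thread). The worker binary name, the environment variable, and the shard count are all invented for illustration; the IPC for forwarding requests between processes is left out.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"os"
	"os/exec"
)

const numWorkers = 4

// shardFor maps a key to the worker process that owns it.
func shardFor(key string) int {
	h := fnv.New32a()
	h.Write([]byte(key))
	return int(h.Sum32()) % numWorkers
}

func main() {
	fmt.Printf("key %q would be handled by shard %d\n", "user/42", shardFor("user/42"))

	// Spawn one worker process per shard.
	procs := make([]*exec.Cmd, numWorkers)
	for i := 0; i < numWorkers; i++ {
		cmd := exec.Command("./fdb-worker") // hypothetical worker binary
		cmd.Env = append(os.Environ(), fmt.Sprintf("SHARD_INDEX=%d", i))
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		if err := cmd.Start(); err != nil {
			fmt.Println("failed to start worker:", err)
			os.Exit(1)
		}
		procs[i] = cmd
	}
	for _, p := range procs {
		p.Wait()
	}
}
```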

Another reason is that we designed and implemented our servers around a multi-thread model, and we have already done a lot of work on our servers' performance under multi-threaded conditions. That work spans implementation libraries, network frameworks, and more; it might cost too much to adapt it all to a multi-process model.

Would you consider releasing binary protocol documentation for FDB? I'd like to implement an event-driven client that isn't a singleton design and can easily be instantiated for multi-threaded apps. I have tried to do that by reading the source of the current client SDK, but it's a little difficult for an FDB newbie :smile:

Sorry, I don’t understand what the unit of “w/s” is? If you have a benchmark-able application already, you might wish to look at top to confirm that you actually are saturating the network thread. I’m not disagreeing though, that once the limit is hit of what one can do on one network thread, it’s inconvenient that the only solution is to run more processes.

Unfortunately, releasing the “binary protocol documents” of FDB wouldn’t work quite the way that you would hope. There was a different thread that covers this topic in detail.

Thanks for your quick response. “w/s” means ten thousand per second.
As discussed in a different thread, maybe the client should not only be wrapped as an SDK but should also run as a standalone process
that other programs can access via a domain socket / RPC / RESTful API / shared memory.
The FDB team could offer an official management architecture for that client process, based on their professional experience, so that others could implement SDKs and scale out client processes easily.
Is it feasible?
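To illustrate the idea, here is a rough sketch of such a standalone client process: a tiny Go HTTP proxy wrapping FDB get/set that other programs reach over a RESTful API. Nothing like this ships with FDB today; the endpoint, port, and API version are all invented for illustration.

```go
package main

import (
	"io"
	"net/http"

	"github.com/apple/foundationdb/bindings/go/src/fdb"
)

func main() {
	fdb.MustAPIVersion(620) // illustrative API version
	db := fdb.MustOpenDefault()

	// GET /kv?key=... reads a key; PUT /kv?key=... writes the request body.
	http.HandleFunc("/kv", func(w http.ResponseWriter, r *http.Request) {
		key := fdb.Key(r.URL.Query().Get("key"))
		switch r.Method {
		case http.MethodGet:
			v, err := db.ReadTransact(func(tr fdb.ReadTransaction) (interface{}, error) {
				return tr.Get(key).Get()
			})
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
				return
			}
			if v == nil || v.([]byte) == nil {
				http.NotFound(w, r)
				return
			}
			w.Write(v.([]byte))
		case http.MethodPut:
			body, _ := io.ReadAll(r.Body)
			_, err := db.Transact(func(tr fdb.Transaction) (interface{}, error) {
				tr.Set(key, body)
				return nil, nil
			})
			if err != nil {
				http.Error(w, err.Error(), http.StatusInternalServerError)
			}
		default:
			http.Error(w, "unsupported method", http.StatusMethodNotAllowed)
		}
	})

	http.ListenAndServe(":8080", nil)
}
```

Several copies of such a proxy could be run behind a load balancer to scale the client side without every application having to manage child processes itself.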

Ten thousand read-only transactions or ten thousand read-write transactions?

If we were to offer a solution, I would hope that it would be something closer to allowing multiple network threads being run in the same process, much like what one would want or expect. The difficulty here is not only in multiplexing across multiple client threads, it’s that our testing is based around an assumption that the entire system can be accurately simulated with a single thread. So though I agree it would be nice, it’s likely not work we’re going to be able to prioritize anytime in the near future.


Ten thousand read-write transactions.
And I'm not very clear on the meaning of

the entire system can be accurately simulated with a single thread

Lots of conditions may cause the entire system to slow down, but offering users a more flexible way to tune things might be helpful (or maybe harmful :laughing:).
For example, adjusting the network thread count, the callback thread count, or the connections per thread might work in some particular conditions.

Testing Distributed Systems with Deterministic Simulation is still probably the best overview of the FDB testing strategy.