FreeBSD support for FoundationDB

I would like to ask about FoundationDB support for FreeBSD. I’ve been able to build it and merge changes from the official port into a version I’d been working on earlier to provide support for FoundationDB on the FreeBSD platform (I’m running FreeBSD 11.1-RELEASE). I have a PR ready to go and have done some basic tests. Work so far is located here. Python bindings are enabled so far; I have yet to explore the other bindings on FreeBSD. Let me know what you think, thanks!



Once this is merged I’d like to get the node bindings working on FreeBSD too. It’s my server OS of choice!

Thanks Seph, I also wanted to follow up with the reasoning behind porting to an alternate OS. I have been working on an HTTP/2-based stack for some time now and originally had plans to use SQLite, until FoundationDB was open sourced. The numbers for SQLite on ZFS and FreeBSD were very impressive and got my attention immediately. I have built this on the FreeBSD 11 kernel to encompass upstream features from Netflix, including their contributions to sendfile(2). NGINX collaborated on this as well to do some pretty cool things with async I/O. The gRPC gateway was recently completed to work alongside NGINX, and so far I’ve gotten it to work inside this same FreeBSD environment. I tend to use the right tool for the job, whether it’s Apple, Google, Microsoft, Linux, or BSD, and in this case FoundationDB seems like a great technology to include in this system I have been planning for ~2 years now. It would be great to share this with the open source community at large to see what they think as well.


I’m happy to accept PRs for things you needed to fix to get working on FreeBSD.

I’d highly suggest running a large number of simulation tests on FreeBSD before considering it working and finished. It wouldn’t be surprising if there’s something platform-specific that breaks determinism or somehow causes test failures. See this thread for a bit more information on how to do so, if you haven’t already.

Alright, sounds good to me and thanks for the info on running tests. I’ll be sure to start on those very soon once I get settled here.

I’m not actually sure how best to validate that a new operating system port of FDB is robust. Simulation testing is not a bad idea, but much of the really OS-dependent stuff is not thoroughly tested in simulation; simulation wouldn’t, for example, be able to tell you that fsync() on OS X doesn’t do what we expect by default.

We once had a physical power-failure testing environment which could run a diversity of OSes and hardware. It was an expensive pain in the butt, destroyed lots of hardware, found very few problems, and I don’t think that Apple is doing it any more. But it would have been a good way to look for that sort of issue.

Now I’m running the tests and getting the hang of that; however, I noticed I’m mounting linprocfs directly to /proc, which is not what I put in the readme:

That’s not the branch; that’s where I change things before moving them over to the branch. So doing the tests is already improving things. Eventually I’d like to not rely on linprocfs, but the alternative is something like:

pid_t ppid = getpid();
struct kinfo_proc *proc = kinfo_getproc(ppid);   /* libutil.h; link with -lutil */

if (proc) {
    printf("name: %s\n", proc->ki_comm);
    printf("size: %ju\n", B2P((uint64_t)proc->ki_size));
    printf("rssize: %ju\n", (uint64_t)proc->ki_rssize);
    free(proc);   /* kinfo_getproc() heap-allocates; the caller must free */
}
Which would potentially be allocating and freeing that struct over and over while fdbserver is polling its status (not very good, IMO). Linprocfs does much more low-level process locking and is probably better off at getting out of the way of SMP operations, e.g.:

struct kinfo_proc kp;
segsz_t lsize;

fill_kinfo_proc(p, &kp);

That approach is more efficient, and I obviously couldn’t come up with anything better; your coding style suggests it should be efficient. The official FreeBSD port for right now appears to set these values to “0”, so I left it reading /proc/self/statm for now, and that has worked out pretty well so far. I would be interested to know which approach has worked out better in FoundationDB between the Mach and Linux kernels: using std::ifstream on Linux, or the clearly defined structs in task_basic_info on Darwin?

So, to answer this question directly: I’m definitely open to doing tests like this. I did 2 years in QA at ClearChannel, and we used to conduct similar tests with SQL Server failover to make sure on-air playout continued when I would pull the CATV from the HP servers we had. Failover was a new feature then, so I was frequently on the phone with Microsoft reporting issues and testing resolutions. We were responsible for the SD-to-HD conversion for many television stations, and while preparing for this we tried to implement tests for our new master control software covering station events like this which might occur. Looking ahead, it would be possible for me to invest in some hardware; I’ve been pricing out NVMe since last night. I’ve also been busy trying to court more devs and DBAs, but that has been slow going; there has been a “wait and see” attitude, and rightly so. I don’t have a temperature-controlled computer lab at my disposal like back then (fiber, SAN, etc.), but for these things I’m committed to trying to tap my network to get something better up than just a VM.


You can see how kinfo_getproc is implemented, lift the necessary bits, and just sysctl into a stack-allocated structure, so malloc isn’t required.


Alright, I like this better than anything I’ve seen out there so far. I mean, I really was splitting hairs over this, but you’re probably right that I should dig deeper rather than skip over the details. Actually, I was agonizing over people disagreeing about where linprocfs should be mounted in the meantime. This solves that problem and is much simpler than worrying about blocking or trying to introduce ad-hoc memory management. Thank you, this should have been obvious. After the High Sierra VirtualBox episode I had last night, it’s obvious that I need to take a step back and un-narrow the focus quite a bit.

While converting the platform code (/flow/Platform.cpp) over to FreeBSD, I’ve created a gist which is a sort of developer’s log, and I noticed that the Linux version counts total packets sent by reading out segments (OutSegs) from /proc/net/snmp, while the Darwin version looks at the interface out packets (if2m->ifm_data.ifi_opackets). This is in the function getNetworkTraffic. I’m assuming it’s because one is used more as a server and the other more as a client, but it’s just an assumption. Taking @jkominek’s advice, I’ve been using sysctl where possible, but now I have the opportunity to use either one: TCP stats for the interface, or overall TCP stats. Which one is preferred, and is this actually due to the client-versus-server usage? Here is the link to the gist; I’d be interested to know what you all think about this.

I spent about 30 seconds investigating why foundationdb even wants to know this, and got as far as the (only?) call site, which saves the produced values into variables named “machineWhatever”. Sounds like it expects whole-stack values.

In the general case, I don’t think a process can expect to figure out what interface(s) its traffic is being sent over, so it doesn’t seem constructive to try. And collecting interface-specific stats opens you up to handling the situation where the interface you’ve chosen goes away.

I’m pretty sure that the goal was for stats on the various platforms to have semantics that match as closely as possible, and that when in doubt Linux should be taken as normative.

What’s the correct way to get whole-machine out segments on Darwin? My suspicion is the team would be happy to see a PR for this.

I agree that’s probably a good course of action; luckily the TCP decision isn’t holding anything up. I’ll update my notes with this. It definitely seems the way to go.

Oh yeah, I was searching around about the Darwin question and came across this too; it might have that information in there:

I’ve made some progress with getting bindings up and running. You definitely deserve the credit for switching me onto FoundationDB in the first place; I think I was looking at some stuff you were doing with Node.js at that point. If you have a repo for Node bindings, I’d be interested in trying to put some pre-emptive support in there while I’m checking out the other languages. At least now I can finally start following the tutorials and, with a little more work, hopefully run tests in parallel with other platforms. This repo does not yet build Linux safely and is separate from the current PR, but it has the native stuff in there, and FDBLibTLS builds as well; I’m still sorting out the differences in symbols between the Makefile options for GCC and Clang. Some flags are probably unnecessary:

Awesome! The node bindings are here. See if you can get them to build. You’ll probably need to edit binding.gyp to add a freebsd build entry, then run $ node-gyp rebuild

I’m keen for a PR once you’ve got it working!

Just a quick update here: I have about a $1K budget to devote to real equipment now, and I plan on following some of the advice here for NVMe.

I tend to stick with Intel and don’t enjoy LEDs in the case either, so when the paperwork for this PO goes through I’ll post the machine specs here. Right now I’m trying to get more debugging symbols generated, if possible, with libc++. This is the “frobnicating” build step. If there is any standard equipment or setup out there in the wild which works well, let me know. I have a TrueOS build which I’m using now with VMware; the notable thing here is the extensions for the Intel PRO/1000 adapter instead of Xhyve’s vmnet adapter. Xhyve will not be abandoned; I’ll just be using Qt Creator for debugging. At this point include files, defines, and hopping around inside the codebase all work in this environment (this picture was taken before all of that was configured).

Tonight I went through most of the class scheduler demo in Python, and as a 6+ year LINQ developer and 10+ year C# dev, FoundationDB may have switched me over! It reminds me of how powerful SQLAlchemy was before LINQ. Fun fact: Grooveshark, which was located right in my home town, was built on top of SQLAlchemy.

@josephg I also gave the node bindings a try, and it finds the includes and builds, but I just can’t figure out how to link against fdb_c. I think binding.gyp needs a FreeBSD entry in there; in the meantime I get:

/usr/home/jessebennett/node_modules/foundationdb/build/Release/fdblib.node: Undefined symbol “fdb_select_api_version_impl”

and I just can’t figure out how to rebuild this. I tried ‘npm rebuild’ and ‘npm i’ inside the node_modules folder, but my node skills are not at the level of my Python skills. Let me know if you can point me to something about how to get it to link against my file, or to verify it has found it. I saw something in your docs about EXTERNAL_CLIENT_DIRECTORY; maybe that could help too.


Awesome! Yep, you’re absolutely right: we’ll need to add an 'OS=="freebsd"' target to binding.gyp. (I assume that’s what it should be; the node-gyp documentation isn’t great.) The commands you’re looking for are node-gyp configure (needed whenever you edit binding.gyp) followed by node-gyp build. It’s probably getting that error because -lfdb_c isn’t being added to the link settings. You can also run npm run install to re-run any post-installation actions in your project, which will check whether you have a working build artifact and configure & build if you don’t.
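For what it’s worth, here’s an untested sketch of what such an entry in binding.gyp might look like (the exact condition key follows gyp’s usual OS conventions, but the library path is an assumption; adjust it to wherever libfdb_c is installed on your system):

```
'conditions': [
  ['OS=="freebsd"', {
    'link_settings': {
      'libraries': ['-lfdb_c'],
      'library_dirs': ['/usr/local/lib'],
    },
  }],
],
```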

If you can’t get it working I’d be happy to have a poke myself. Is your freebsd FDB fork up somewhere?

The EXTERNAL_CLIENT_DIRECTORY thing is a bit different: it’s used at runtime to specify extra locations to search for libfdb_c. The reason it’s there is sort of weird: at runtime the client actually loads any and all copies of libfdb_c it can find. When connecting to a server, it tries all of the copies of the C library to see which one lets it actually connect. The reason it’s designed that way is that fdb doesn’t use a stable network protocol. To allow users to upgrade to a new version of the cluster without their applications going down, they expect people to deploy multiple copies of the .so file (for the old version and the new version). That environment variable (and equivalent configuration options) passes extra search paths to dyld at runtime, so you can put all your libfdb_c-*.so files in a configurable location.

But the problem you’re running into is a compilation problem. Edit binding.gyp.