CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
486348df172b fdbgolangsample_app "/start.bash" 2 days ago Up 2 days 0.0.0.0:8080->8080/tcp fdbgolangsample_app_1
fa6e3729a803 foundationdb/foundationdb:6.3.15 "/usr/bin/tini -g --…" 2 days ago Exited (20) 8 minutes ago fdbgolangsample_fdb-server-1_1
fd8a17a2195f foundationdb/foundationdb:6.3.15 "/usr/bin/tini -g --…" 2 days ago Up 2 days 0.0.0.0:4500->4500/tcp fdbgolangsample_fdb-coordinator_1
91158d4042b2 foundationdb/foundationdb:6.3.15 "/usr/bin/tini -g --…" 2 days ago Exited (20) 9 hours ago _fdb-server-1_1
c3ed74947f43 foundationdb/foundationdb:6.3.15 "/usr/bin/tini -g --…" 2 days ago Up 2 days 0.0.0.0:4500->4500/tcp _fdb-coordinator_1
It looks like the Go and Python bindings don’t show growing memory usage with FDB 6.2, but they do with FDB 6.3. So the cause of the problem is probably FDB 6.3.
I don’t have any experience with the golang sample, but I don’t think I’ve observed a leak in other 6.3 usage yet. I’ll try to reproduce with the sample, and if I can, we can try to get to the bottom of it.
I haven’t definitively shown this to be the problem, but one thing I noticed about the 6.3 sample is that there is no /var/fdb/logs directory, even though that is where the server processes are writing their logs. Creating this directory while the process was running caused it to immediately write out all of the log files it had been intending to write.
Meanwhile, when I tried this sample with a 6.2 image, the logs directory was already present.
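If anyone else hits the same symptom before the image is fixed, a minimal workaround sketch is to create the missing directory inside the running server container so fdbserver can flush its buffered trace events to disk. The container name below is just the coordinator from the docker ps output above; adjust it for your own compose project:

    # create the missing trace log directory in the running container
    docker exec fdbgolangsample_fdb-coordinator_1 mkdir -p /var/fdb/logs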
My theory, then, is that the increased memory usage is the accumulation of all the trace log events being buffered because they can’t be written. If so, this problem is primarily one of how the docker image is set up. Perhaps the server process should also not buffer an unbounded number of trace log events in this situation, either.
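As a rough sketch of the kind of image-side fix this suggests, a derived image could pre-create the log directory (the 6.2 image already ships with it) and keep on-disk trace logs bounded once they can be written. This is not the actual upstream fix; --logdir and --maxlogssize are standard fdbserver options, but how the official entrypoint passes flags through isn’t shown here, and the 100MiB cap is only an illustrative value:

    # sketch only: pre-create the trace log directory and cap on-disk trace log growth
    FROM foundationdb/foundationdb:6.3.15
    RUN mkdir -p /var/fdb/logs
    # e.g. launch with: fdbserver ... --logdir /var/fdb/logs --maxlogssize 104857600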