After I restart a machine running FoundationDB (i.e. restarting fdbmonitor via systemd), the total free memory on the machine increases. What causes fresh FDB processes to use less memory than processes that have been running for a long time?
Is there an optimization that caches frequently used data in memory while FDB is running?
Some of the memory used by FDB is not released back to the OS but rather kept internally and reused. This causes its memory footprint to increase over time: each time the workload requires a new peak amount of short-lived memory, some of that memory will be of the internally-reusable type mentioned above, so the process's footprint ratchets up to that peak and stays there.
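You can see some of this from the machine-readable status. As a rough sketch (assuming `fdbcli` is on your PATH and can reach your cluster; field names like `unused_allocated_memory` and `rss_bytes` come from the status json schema and may differ slightly between versions):

```python
import json
import subprocess

# Fetch machine-readable status from the cluster.
raw = subprocess.check_output(["fdbcli", "--exec", "status json"]).decode()
# Skip any preamble fdbcli may print before the JSON document.
status = json.loads(raw[raw.index("{"):])

# Print per-process memory figures; "unused_allocated_memory" is memory the
# process is holding internally for reuse rather than returning to the OS.
for proc_id, proc in status["cluster"]["processes"].items():
    mem = proc.get("memory", {})
    print(
        proc_id,
        proc.get("address"),
        "used:", mem.get("used_bytes"),
        "rss:", mem.get("rss_bytes"),
        "unused_allocated:", mem.get("unused_allocated_memory"),
    )
```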
In particular, the “Storage Queue” size on Storage Servers includes the MVCC data structure, which holds mutations not yet written to disk. It is normally small in the healthy case, but if the workload generates a high write rate to the Storage Server, or the Storage Server falls behind, the Storage Queue can grow up to ~2GB. Once the Storage Server catches up, the memory used by this state remains with the process and will be reused when the MVCC structure grows again.
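If you want to watch the Storage Queue per storage role, a minimal sketch along the same lines (queue size approximated as `input_bytes - durable_bytes`, which is how status reports it; again, field names are from the status json schema and may vary by version):

```python
import json
import subprocess

raw = subprocess.check_output(["fdbcli", "--exec", "status json"]).decode()
status = json.loads(raw[raw.index("{"):])  # skip any preamble fdbcli may print

# Storage queue = bytes received by the storage role minus bytes made durable.
for proc_id, proc in status["cluster"]["processes"].items():
    for role in proc.get("roles", []):
        if role.get("role") != "storage":
            continue
        input_bytes = role.get("input_bytes", {}).get("counter", 0)
        durable_bytes = role.get("durable_bytes", {}).get("counter", 0)
        print(proc_id, "storage queue bytes:", input_bytes - durable_bytes)
```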
There are also some smaller in-memory data structures which grow over time, or grow and shrink with the workload, leaving some memory slack.
The page cache (for the non memory-* storage engines) will grow to its configured size at whatever rate your workload accesses data, and then it will stay at that size, evicting pages as needed for new read/write operations.
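If you want to bound that part of the footprint, the page cache size is controlled by a knob. As a sketch only (the `knob_page_cache_4k` name and byte units are what I recall for the ssd engine; other engines and versions may use a different knob, so check the docs for your version):

```ini
[fdbserver]
# Page cache size for the ssd storage engine, in bytes (here ~1 GiB).
# Knob names can vary by version/engine; verify before relying on this.
knob_page_cache_4k = 1073741824
```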