I’m talking about the output that it writes to files named trace-<IP, port, timestamp, random chars>.json (or presumably .xml if you’ve not got trace-format = json configured).
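For reference, this is the setting I mean, in foundationdb.conf — an illustrative fragment only, not our full config (the [fdbserver] section name matches the standard conf layout, but double-check against your version):

```ini
# foundationdb.conf — illustrative fragment
[fdbserver]
trace-format = json   # trace files come out as .json instead of the default .xml
```
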
We initially had an issue where our clients were turning on the server-side request tracing flag, which caused a huge amount of log spam (and triggered some sort of memory leak on the client side…).
We’ve now disabled that, but the logs from FDB are still around a 40x increase over what we previously had across all our environments. Occasionally there’s a warning or above that we’d like to keep, but most of it is metric information from events like Role/Refresh etc. We already graph the relevant metrics from other sources; we don’t want them in the log output.
I am aware you can limit the size of the log files on disk, but we’re in the cloud, so the hosts are ephemeral: they grab a data volume and an associated IP on startup before running FoundationDB. They can also run into issues where the host becomes unreachable. As such, we export all our logs to an external system (DataDog, in this case) so that we can refer to them even after a host is gone.
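For concreteness, the on-disk size limiting I’m referring to is (as far as I can tell) the fdbserver trace-rolling options — roughly the following, though treat the option names and values as my best understanding rather than gospel:

```ini
# illustrative fragment — trace file rolling/retention as I understand it
[fdbserver]
maxlogsize = 10MiB    # roll to a new trace file once the current one hits this size
maxlogssize = 100MiB  # delete the oldest trace files once the total exceeds this
```

This caps disk usage, but doesn’t help us: the files get shipped to DataDog before they’re rolled off.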
FoundationDB’s log output has caused a pretty massive spike in our log-ingestion costs, which I’m under pressure to reduce. But I don’t want to stop exporting these files entirely, as that would severely limit our ability to troubleshoot if something goes wrong with a host.
I can’t find anything in the docs about even something as broad as setting a minimum log level for output to these files. How do I reduce the log spam here?