In one of our dev FDB clusters, two fdb nodes (Kubernetes pods/containers) didn’t purge trace log files as expected; instead, they accumulated log files whose total size exceeded maxlogssize.
How many processes do you have running? AFAIK these limits are per process. So if you have 5 processes and each has 5 GiB for its maxlogssize, that can use up to 25 GiB of disk space in total.
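For reference, this is roughly how the per-process limit is set in foundationdb.conf (a sketch with illustrative values; check the configuration docs for the exact defaults, which are much smaller, e.g. 100MiB):

```ini
; foundationdb.conf (illustrative values)
[fdbserver]
logdir = /var/log/foundationdb
; maxlogssize applies per fdbserver process:
; 5 processes x 5GiB => up to ~25GiB of trace logs on disk
maxlogssize = 5GiB
```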
Hello, what about the case where a pod gets restarted? Its IP address changes, so will it be treated as a new process, with the log size counted from 0 again? Thanks!
That case is currently not well handled, and I’m not aware of a solution, since to my knowledge trace_file_identifier is only supported in the client, not in the fdbserver binary.
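For the client side, a sketch of how the identifier can be set via the FDB_NETWORK_OPTION_ environment-variable mechanism (the directory path and identifier here are made-up examples; the option names assume a recent client API version that supports trace_file_identifier):

```shell
# Give this client's trace files a stable identifier across restarts,
# so new log files are not named after the (changing) IP address.
export FDB_NETWORK_OPTION_TRACE_ENABLE=/var/log/myapp            # directory for client trace logs
export FDB_NETWORK_OPTION_TRACE_FILE_IDENTIFIER=myapp-client-1   # stable name, survives pod restarts
```

These variables are read when the client network is started, so they must be set before the application initializes the FDB client.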