Process not listed in the cluster file still reporting as a coordinator

I have the following cluster file:

fdb_cluster:passwd@10.1.0.22:4500:tls,10.1.0.30:4500:tls,10.1.0.53:4500:tls,10.2.0.11:4500:tls,10.2.0.12:4500:tls,10.2.0.14:4500:tls,10.3.0.56:4500:tls,10.3.0.57:4500:tls,10.3.0.65:4500:tls

Yet on the node with IP 10.1.0.11, which is not in that list, I can see this:

trace.10.1.0.11.4500.1758683320.CQNVsv.1.53.json:{ "Severity": "10", "Time": "1758832646.458266", "OriginalTime": "1758683320.937964", "DateTime": "2025-09-25T20:37:26Z", "OriginalDateTime": "2025-09-24T03:08:40Z", "Type": "Role", "ID": "c1f09924584df6c5", "As": "Coordinator", "Transition": "Begin", "Origination": "Recruited", "OnWorker": "0000000000000000", "ThreadID": "11819242166644492999", "Machine": "10.1.0.11:4500", "LogGroup": "fdb-cluster", "Roles": "CD", "TrackLatestType": "Rolled" }

In theory the node shouldn't be a coordinator, right?
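As a quick sanity check, the connection string itself can be parsed to confirm the node is not listed as a coordinator. A minimal sketch (the string below is the one from the post):

```python
# An FDB connection string has the form description:id@addr1,addr2,...
# This is the cluster file quoted above.
cluster = ("fdb_cluster:passwd@10.1.0.22:4500:tls,10.1.0.30:4500:tls,"
           "10.1.0.53:4500:tls,10.2.0.11:4500:tls,10.2.0.12:4500:tls,"
           "10.2.0.14:4500:tls,10.3.0.56:4500:tls,10.3.0.57:4500:tls,"
           "10.3.0.65:4500:tls")

# Everything after the "@" is the comma-separated coordinator list.
coordinators = cluster.split("@", 1)[1].split(",")
print("10.1.0.11:4500:tls" in coordinators)  # → False: the node is not listed
```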

Note: I’m not using K8s in this cluster and I think at some point in the past this node was a coordinator.

Please run the following validations; they may help you troubleshoot the issue further.

In FoundationDB, each process can be mapped to different roles such as coordinator, log, stateless, and storage.

  1. Log in to the FDB node 10.1.0.11 and verify the role mapped to the fdbserver process.
  2. Since multiple FDB processes can run on a single node, check the output of status json in fdbcli to see the roles assigned to the processes on 10.1.0.11.
  3. After starting the coordinator process, update the coordinator set by running the coordinators command in fdbcli with both the existing addresses and the new ones; the cluster files will be updated accordingly.
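For step 2, the roles per process can be pulled out of the status json output. A sketch, assuming the JSON has been captured with something like `fdbcli --exec "status json" > status.json`; the excerpt below is an illustrative, heavily trimmed shape of that output, with made-up process IDs and addresses:

```python
import json

# Hypothetical, trimmed excerpt of `status json` output. The real output has
# many more fields; only cluster.processes[*].address and .roles are used here.
status = json.loads("""
{
  "cluster": {
    "processes": {
      "abc123": {
        "address": "10.1.0.11:4500",
        "roles": [{"role": "storage"}]
      },
      "def456": {
        "address": "10.1.0.22:4500",
        "roles": [{"role": "coordinator"}, {"role": "log"}]
      }
    }
  }
}
""")

# Print each process address with the list of roles it currently holds.
for proc in status["cluster"]["processes"].values():
    roles = [r["role"] for r in proc["roles"]]
    print(proc["address"], roles)
```

Any process whose roles list does not contain "coordinator" is not currently recruited as one, whatever its old trace lines say.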

Was it supposed to have been a coordinator until like two days prior? If so, you’re being fooled by rolled log messages. (Which would especially make sense to me if this worker hasn’t been recruited as any other role since that point in time.)

If you changed coordinators before that, then note that coordinators do hold onto their state even after they're configured away from being coordinators: they keep forwarding information so that if a client still connects to them as a coordinator, it will receive the updated set of coordinators to connect to instead and can update its own cluster file accordingly. I don't remember how long that sticks around, though, nor how it intersects with the Type=Role messages.

The only way this would be concerning is if there are still some clients or fdbserver processes with old cluster files trying to connect to it as a coordinator. I'd assume you'd notice that through some sort of availability issue, though, so this is likely a harmless logging artifact one way or another. If it's causing you problems when grepping logs, filtering out TrackLatestType=Rolled is generally wise regardless.
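That filter might look like this. A sketch: the two sample lines are illustrative, shaped like the trace event quoted above, not real cluster output:

```python
import json

# Two illustrative Role trace events: one replayed from a rolled trace file
# (TrackLatestType=Rolled, like the event in the question) and one current.
lines = [
    '{"Type": "Role", "As": "Coordinator", "Machine": "10.1.0.11:4500",'
    ' "TrackLatestType": "Rolled"}',
    '{"Type": "Role", "As": "Storage", "Machine": "10.1.0.22:4500",'
    ' "TrackLatestType": "Original"}',
]

for line in lines:
    event = json.loads(line)
    if event.get("TrackLatestType") == "Rolled":
        continue  # skip events re-emitted from rolled trace logs
    print(event["Type"], event["As"], event["Machine"])
    # → Role Storage 10.1.0.22:4500
```

The same condition works as a `grep -v 'TrackLatestType.*Rolled'` when eyeballing the raw trace files.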

Maybe it was. In the end I ended up cleaning everything and restarting fresh, as it was a test cluster.