Adding a resolver causes cluster to become non-reconciled

Hi,
Whenever we expand our cluster to add a new resolver pod, the cluster becomes non-reconciled and unstable. Is there a particular requirement on how many resolvers to run, or on the CPU/memory resources for the resolver pod?
```yaml
foundationdb:
  cluster:
    process_counts:
      cluster_controller: 1
      proxy: 3
      resolver: 1
      stateless: -1
      storage: 6
```

But if we reduce the proxy count to 2, the cluster is OK, so why this strange behaviour?

It’s hard to say without more context on why the reconciliation is not completing. If reducing another process count helps, it could be that your Kubernetes cluster doesn’t have enough resources for the new resolver process.
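If resource pressure is the problem, one thing to try is raising the resource requests on the FDB pods. Here is a minimal sketch, assuming a recent operator version where the pod template is set under spec.processes.general.podTemplate (older versions put podTemplate directly under spec); the cluster name, version, and sizes are illustrative, not taken from your cluster:

```yaml
apiVersion: apps.foundationdb.org/v1beta1
kind: FoundationDBCluster
metadata:
  name: sample-cluster        # illustrative name
spec:
  version: 6.2.30             # illustrative FDB version
  processes:
    general:
      podTemplate:
        spec:
          containers:
            - name: foundationdb   # main FDB container
              resources:
                requests:
                  cpu: "1"
                  memory: 4Gi
                limits:
                  cpu: "1"
                  memory: 4Gi
```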

Ignore my previous message; I forgot something more critical. The resolver field in process counts is deprecated, and is known to not work correctly: fdb-kubernetes-operator/cluster_spec.md at master · FoundationDB/fdb-kubernetes-operator · GitHub

I think I read in another thread that I need to use the DatabaseConfiguration spec to set the number of resolvers. Is that correct?

The databaseConfiguration section does something different, but it may be what you want. databaseConfiguration tells the database how many workers to recruit for each role, whereas processCounts tells the operator how many pods to run for each process class. If you don’t specify processCounts, the operator infers the process counts from the database configuration, and I think that is the way to go unless you have a specific requirement that differs from what the operator infers. We have more information on this in the user manual, in the Scaling section.
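To make that concrete, here is a minimal sketch of setting the resolver count through databaseConfiguration and leaving processCounts out so the operator infers the pod counts itself. The cluster name, version, and role counts are illustrative placeholders, not a recommendation for your workload:

```yaml
apiVersion: apps.foundationdb.org/v1beta1
kind: FoundationDBCluster
metadata:
  name: sample-cluster        # illustrative name
spec:
  version: 6.2.30             # illustrative FDB version
  databaseConfiguration:
    redundancy_mode: double
    storage: 6
    logs: 3
    proxies: 3
    resolvers: 2              # number of resolver roles to recruit
  # No processCounts section: the operator infers how many pods of each
  # process class it needs to satisfy the configuration above.
```

After the cluster reconciles, the status output in fdbcli shows the desired proxies, logs, and resolvers, so you can confirm the new configuration took effect.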
