FDB + SSL and operator automation

I have been trying to use SSL with FDB lately. Initially I thought a wildcard certificate would be sufficient, but it is not, since the operator appears to connect to the sidecar by IP when SSL is enabled for it:

"error":"GET https://10.193.1.80:8080/substitutions giving up after 1 attempt(s): Get \"https://10.193.1.80:8080/substitutions\": tls: failed to verify certificate: x509: cannot validate certificate for 10.193.1.80 because it doesn't contain any IP SANs"

With the help of cert-manager and an ad hoc Certificate object, it’s possible to have cert-manager issue a certificate for the pod that contains the pod’s name and IP as SANs.
Would it be possible for the operator to optionally generate this? The main drawback is that it ties the operator to one certificate-issuing solution. Alternatively, I was wondering why it uses the pod’s IP rather than its name to connect.
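
For reference, here is a minimal sketch of such an ad hoc Certificate, assuming a cert-manager Issuer named fdb-ca-issuer and a storage pod named fdb-cluster-storage-1 in the fdb namespace; all names, the namespace, and the IP are placeholders, and the IP has to match the actual pod, which is what makes this awkward to automate:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: fdb-cluster-storage-1-cert
  namespace: fdb
spec:
  secretName: fdb-cluster-storage-1-tls
  issuerRef:
    name: fdb-ca-issuer   # placeholder Issuer
    kind: Issuer
  # The operator connects to https://<pod IP>:8080, so the certificate needs
  # the pod's IP as a SAN in addition to its DNS name.
  dnsNames:
    - fdb-cluster-storage-1.fdb.svc.cluster.local
  ipAddresses:
    - 10.193.1.80
```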

The decision to connect to the pod’s sidecar using the Pod IP was made a long time ago and I don’t remember the details (I don’t think I was working on the operator at that time). Right now there are two options that would work without any additional modification:

  1. Using the unified image instead of the split image (the unified image has been the default in the operator since the 2.0 release).
  2. Adding the DISABLE_SIDECAR_TLS_CHECK=1 env variable to the operator (link: fdb-kubernetes-operator/docs/manual/tls.md at main · FoundationDB/fdb-kubernetes-operator · GitHub), with all its implications; see the sketch below.
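
A minimal sketch of option 2, written as a partial patch to the operator’s Deployment; the container name below follows the default operator deployment and is an assumption, so adjust it to your setup:

```yaml
# Partial Deployment spec for the operator; merge the env entry into the
# existing operator container (the container name "manager" is an assumption).
spec:
  template:
    spec:
      containers:
        - name: manager
          env:
            # Skips hostname/IP verification when the operator talks to the
            # sidecar over TLS; see the linked tls.md for the implications.
            - name: DISABLE_SIDECAR_TLS_CHECK
              value: "1"
```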

In theory we could implement support for connecting to the sidecar via its DNS entry instead of the Pod IP, but I believe this requires at least a headless service, and since the split image is in “maintenance mode” we are not spending much time on adding new features to it. If that’s a feature you need and you don’t want to use the unified image yet, feel free to open a PR.
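
For context, a headless service is just a Service with clusterIP: None; per-pod DNS names of the form <pod>.<service>.<namespace>.svc.cluster.local additionally require the pods to set a matching subdomain. A rough sketch, with placeholder names and selector:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: fdb-cluster-pods
  namespace: fdb
spec:
  clusterIP: None                  # headless: DNS resolves directly to pod IPs
  selector:
    foundationdb.org/fdb-cluster-name: fdb-cluster   # placeholder selector
  ports:
    - name: sidecar
      port: 8080
```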

I will have to start evaluating the unified image. So far the sidecar has suited us well, and we customized how the zoneID is calculated by adding a script that computes the right value for us and setting ADDITIONAL_ENV_FILE so that this script is sourced before the pod actually starts.
If this is still possible with the unified image, then I guess I should look at switching.

This is already possible with the unified image: foundationdb/fdbkubernetesmonitor/main.go at main · apple/foundationdb · GitHub. You would set ADDITIONAL_ENV_FILE on the main container (the sidecar is only used for upgrades). In addition, the unified image supports reading labels from the node where the pod is running: fdb-kubernetes-operator/docs/manual/fault_domains.md at main · FoundationDB/fdb-kubernetes-operator · GitHub.
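
To illustrate, a sketch of what that could look like in the FoundationDBCluster spec, assuming the podTemplate override, a ConfigMap that ships the custom script, and the same file-sourcing semantics as the sidecar; the ConfigMap name, mount path, script name, and FDB version are placeholders:

```yaml
apiVersion: apps.foundationdb.org/v1beta2
kind: FoundationDBCluster
metadata:
  name: fdb-cluster
spec:
  version: 7.1.57                       # placeholder FDB version
  processes:
    general:
      podTemplate:
        spec:
          containers:
            - name: foundationdb        # the main container, not the sidecar
              env:
                # The file is loaded before the processes start, so the
                # script can export e.g. FDB_ZONE_ID with the computed value.
                - name: ADDITIONAL_ENV_FILE
                  value: /var/custom-env/zone-id.sh
              volumeMounts:
                - name: custom-env
                  mountPath: /var/custom-env
          volumes:
            - name: custom-env
              configMap:
                name: fdb-custom-env    # placeholder ConfigMap with zone-id.sh
```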