Changing the PVC size in the spec results in new pods being created while the old pods remain

Hi,
I noticed that if I change the storage size of a PVC and reapply the cluster file, new pods using that storage definition appear with the new size, but the old pods are still there. I know one can expand a PVC by editing its YAML directly, and if the storage class supports dynamic expansion that will work, but I'm not sure whether this operator behaviour is expected. It results in a data redistribution to the new pods.
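For reference, the direct PVC expansion I mean is roughly the following (the PVC name is just illustrative, and it only works when the StorageClass sets allowVolumeExpansion: true):

```sh
# Hypothetical PVC name created by the operator; adjust to your cluster.
# Requires a StorageClass with allowVolumeExpansion: true.
kubectl patch pvc sample-cluster-storage-1-data \
  -p '{"spec":{"resources":{"requests":{"storage":"256Gi"}}}}'
```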
To get rid of the old pods, are these steps OK? Exclude the processes on the old pods from the database, and once the database has rebalanced, delete the old pods and remove their IPs from the system. Is that right?

> I noticed that if I change the storage size of a PVC and reapply the cluster file, new pods using that storage definition appear with the new size, but the old pods are still there. I know one can expand a PVC by editing its YAML directly, and if the storage class supports dynamic expansion that will work, but I'm not sure whether this operator behaviour is expected. It results in a data redistribution to the new pods.

This is the expected behaviour and is documented here: https://github.com/FoundationDB/fdb-kubernetes-operator/blob/main/docs/manual/customization.md#customizing-the-volumes. It is done this way to support all cases uniformly: local storage, storage classes that support expansion, and storage classes that don't. In the future the operator will probably become more flexible and support different models for updating different fields.
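In other words, a change to the volume claim template like the one below (apiVersion, cluster name, FDB version, and sizes are only illustrative) triggers a replacement of the affected process groups rather than an in-place expansion:

```yaml
apiVersion: apps.foundationdb.org/v1beta2
kind: FoundationDBCluster
metadata:
  name: sample-cluster
spec:
  version: 7.1.26
  processes:
    general:
      volumeClaimTemplate:
        spec:
          resources:
            requests:
              storage: 256Gi  # bumping this creates new pods/PVCs with the new size
```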

> To get rid of the old pods, are these steps OK? Exclude the processes on the old pods from the database, and once the database has rebalanced, delete the old pods and remove their IPs from the system. Is that right?

There is no human interaction required; the operator will remove the old Pods (including their PVCs) once the data has been moved away from those processes.
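If you want to watch the progress, something along these lines should work (the cluster name, pod name, and the pod label are assumptions based on how the operator typically labels its resources):

```sh
# Watch old and new pods side by side while the migration runs
kubectl get pods -l foundationdb.org/fdb-cluster-name=sample-cluster -w

# Check data distribution and exclusion progress from inside any running pod
kubectl exec -it sample-cluster-storage-1 -c foundationdb -- fdbcli --exec 'status details'
```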

So I just need to wait, is that correct? Which versions of the operator support this?

On a separate question: what if someone removed the pods manually BEFORE the data movement finished, resulting in UNHEALTHY: No replicas remain of some data? Is running a restore the only way to recover?

A follow-up to that question.
What if, say, I changed the size and the new storage pods got created, and before the migration could finish I deleted all the old storage pods, and now the database is telling me it is in an "Unhealthy state, some data don't have any replica"? Is there a way to clear the database of all its data and put it back into a healthy state so that I can do a restore?
Or is recreating the database from scratch (and then applying the restore) the only way?

> So I just need to wait, is that correct? Which versions of the operator support this?

I believe all versions of the operator should support this.

> On a separate question: what if someone removed the pods manually BEFORE the data movement finished, resulting in UNHEALTHY: No replicas remain of some data? Is running a restore the only way to recover?

Since version 1.2.0 those Pods are recreated until they are fully excluded (data movement finished): Release v1.2.0 · FoundationDB/fdb-kubernetes-operator · GitHub

> What if, say, I changed the size and the new storage pods got created, and before the migration could finish I deleted all the old storage pods, and now the database is telling me it is in an "Unhealthy state, some data don't have any replica"? Is there a way to clear the database of all its data and put it back into a healthy state so that I can do a restore?

As above, the operator (1.2.0+) should recreate those Pods, and the cluster should be able to finish the data movement. You can also remove the whole key space; see this post for the command: Can't clear database (delete all data)
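For completeness, the command from that post boils down to something like the following (the pod name is illustrative; this wipes every key in the database, so only run it right before a restore):

```sh
# DESTRUCTIVE: clears the entire key space of the cluster
kubectl exec -it sample-cluster-storage-1 -c foundationdb -- \
  fdbcli --exec 'writemode on; clearrange "" \xff'
```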