You are correct that major/minor releases of FoundationDB are always protocol incompatible, and patch releases are always protocol compatible. This means that rolling upgrades are not an option between major/minor versions. Hopefully I can provide a little bit of insight into why we made this choice.
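The versioning rule above can be sketched in a few lines. This is an illustrative helper, not part of the FoundationDB API: two releases speak the same protocol only when their major and minor versions match.

```python
# Illustrative sketch of the compatibility rule described above
# (hypothetical helper, not a FoundationDB function): releases are
# protocol compatible iff major and minor versions match.
def protocol_compatible(a: str, b: str) -> bool:
    """Versions are 'major.minor.patch' strings, e.g. '6.2.19'."""
    return a.split(".")[:2] == b.split(".")[:2]

# Patch releases are compatible; major/minor bumps are not.
assert protocol_compatible("6.2.19", "6.2.20")      # patch upgrade: OK
assert not protocol_compatible("6.1.12", "6.2.19")  # minor upgrade: incompatible
```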
The first consideration is that the master, proxies, resolvers, and transaction logs act as a unit in FoundationDB. If any process recruited in one of those roles fails, we recruit new instances of all of them. This recruitment generally completes in less than a second, so a machine failure does not have much of a latency impact, but a rolling upgrade would cause one recovery per machine rebooted.
The second consideration is that, because FoundationDB has so many specialized roles, the protocol is very complex. Testing interactions between different versions communicating with each other would not be trivial: to begin with, we would need to develop methods for deterministically running simulations across two different binaries. We obsessively test everything, including our upgrades, and we are not going to support a feature we cannot test rigorously.
Finally, even without rolling upgrades, the latency impact of our current upgrade process is generally less than the latency impact of a machine failure. Ongoing client operations are generally delayed by less than a second while an upgrade is happening. Also, because of how rigorously we test upgrades, we do not have to worry about bringing down the database when upgrading.
Upgrades with FoundationDB happen in three steps:
First, load the new client library into your clients (https://apple.github.io/foundationdb/api-general.html#multi-version-client-api). This lets clients speak the protocol of both the old and the new version. The client will attempt to connect with both versions simultaneously, so as soon as the servers are upgraded it will automatically be able to connect.
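Network options, including the multi-version client ones, can also be supplied through the environment. A minimal sketch, assuming the new library has been copied to the path shown (the path is an example, not a fixed location):

```
# Tell the client to additionally load a newer libfdb_c alongside the
# one it was linked against; the FDB_NETWORK_OPTION_ prefix maps an
# environment variable onto the network option of the same name.
export FDB_NETWORK_OPTION_EXTERNAL_CLIENT_LIBRARY=/usr/lib/foundationdb/libfdb_c_new.so
```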
Second, load the new version of the fdbserver binary onto the server machines. fdbmonitor will continue running the old version; however, the next time it restarts a process it will use the new binary.
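fdbmonitor simply launches whatever binary its configuration points at, so installing the new binary at that path is all this step requires. A trimmed-down fragment of the configuration file (paths vary by platform and install; the ones below are the Linux package defaults, so verify them on your system):

```
# /etc/foundationdb/foundationdb.conf (fragment)
[fdbserver]
command = /usr/sbin/fdbserver
```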
Finally, use fdbcli to force all the processes in the cluster to reboot at the same time. This is accomplished with the kill command.
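A typical fdbcli session for this step looks like the sketch below. Running kill with no arguments first populates the list of target processes, and kill all then reboots every one of them; confirm the exact semantics against your fdbcli version.

```
fdb> kill
fdb> kill all
```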
The result of this process is that all servers switch to the new version within milliseconds of each other, and clients can connect to the cluster as soon as the processes come back online. Generally, both the transaction logs and storage servers can recover their state from disk in about 100 ms, and once they are done recovering, the rest of the recovery process takes less than a second. The only thing to watch out for is to avoid doing an upgrade while the transaction logs have large queues. In other words, do not upgrade while the cluster is saturated or is recovering from a machine failure.
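One way to automate that last check is to inspect the machine-readable output of fdbcli's status json before kicking off the upgrade. A hedged sketch; the qos field name below matches the status document in recent FoundationDB releases, but treat it, and the threshold, as assumptions to verify against your version's schema:

```python
import json

# Example threshold, not an official limit; tune for your cluster.
SAFE_LOG_QUEUE_BYTES = 100 * 1024 * 1024

def safe_to_upgrade(status_json: str) -> bool:
    """Return True if the worst transaction log queue looks small enough
    to upgrade, based on a `status json` document from fdbcli."""
    status = json.loads(status_json)
    qos = status["cluster"]["qos"]  # assumed field layout; verify per version
    return qos["worst_queue_bytes_log_server"] < SAFE_LOG_QUEUE_BYTES

# Example with a trimmed-down status document:
sample = json.dumps({"cluster": {"qos": {"worst_queue_bytes_log_server": 4096}}})
print(safe_to_upgrade(sample))  # → True
```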
Currently DR does not work between versions, but that should be very easy to add. DR is implemented as an external process, so we just need to integrate it with the multi-version client API.