Help converting cloud AZ & Region requirements to Region & DC setup?

I am interested in setting up a FoundationDB cluster with the following requirements. I am using AWS AZ & Region terminology for this. I am also quite new to FDB terminology, so please be patient with me XD I’ve tried reading through all the forum posts and docs I found relevant.

  1. 3 nodes per AZ, each running a single FDB process
  2. 3 AZs per region
  3. Multi-region (2 east and 1 west)
  4. Primary operation should be between the 2 east coast regions, replicating durably between them
  5. If a single AZ in east1 fails, then east2 becomes the primary.
  6. If 2 or more AZs in a single east coast region are lost, then we replicate durably to the west coast (thus increasing latencies significantly, but keeping multi-region).
  7. If both east coast regions are lost, the west coast is promoted to primary (this assumes they weren't lost before the west coast could become consistent with the east).

My hypothesis is to ignore the AZ granularity entirely and treat the whole east coast as one region in FDB terms, and the west as another. I am unsure whether a DC should represent an AWS region or an AWS AZ. Based on the configuration options, I am not sure this precise behavior can be obtained with FDB, especially if I ignore the AZ and treat each east coast AWS region as an FDB DC. As a result I would get varying east coast latencies depending on whether a transaction crossed the AWS regions or not, but I would get the desired behavior until a majority of the east coast was lost. On top of that, I might lose the majority of an AWS region and not fail over, because the DCs (AWS regions) are not aware of the locality of the AZs below them.

I am also not sure what satellite_logs and satellite_redundancy_mode I would use here. My guess is that I would use 2 and two_satellite_fast respectively?
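To make that concrete, here is a rough sketch of the regions JSON I think this maps to, with each east coast AWS region as an FDB DC (use1/use2/usw1 are just placeholder ids). If I'm reading the docs right, a region can only contain one non-satellite DC, so the second east AWS region would have to act as a satellite (storing transaction logs only), which would also mean only the one_satellite_* modes apply here, since two_satellite_fast needs two satellite DCs:

```json
{
  "regions": [
    {
      "datacenters": [
        { "id": "use1", "priority": 1 },
        { "id": "use2", "priority": 0, "satellite": 1 }
      ],
      "satellite_redundancy_mode": "one_satellite_double",
      "satellite_logs": 2
    },
    {
      "datacenters": [
        { "id": "usw1", "priority": 0 }
      ]
    }
  ]
}
```

My understanding is this would be submitted with `fileconfigure regions.json` in fdbcli, together with `configure usable_regions=2`.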

Ultimately, my goal is to keep data on the east coast until a majority of an east coast region is lost. It seems that in the cloud (or at least on AWS), what you actually lose is an entire region rather than individual AZs, since schedulers like k8s will quickly provision a new node in a different AZ as needed, and thus that "DC" will come back online. Perhaps I would need to prevent this, or have some scripting on node start to set the DC based on the region/AZ the pod is created in.
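For that node-start scripting, this is roughly what I have in mind (just a sketch: the IMDS endpoints are standard EC2 metadata, but the locality flag spelling differs between FDB versions, so check `fdbserver --help`):

```python
# Sketch of a pod/node startup helper: derive FDB locality from EC2 instance
# metadata (IMDSv2) so a rescheduled pod reports the DC/data hall it actually
# landed in, instead of carrying a stale locality with it.
import urllib.request

IMDS = "http://169.254.169.254/latest"

def imds_get(path: str) -> str:
    # IMDSv2: fetch a short-lived session token, then read the metadata path.
    token_req = urllib.request.Request(
        f"{IMDS}/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": "60"},
    )
    token = urllib.request.urlopen(token_req).read().decode()
    req = urllib.request.Request(
        f"{IMDS}/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    return urllib.request.urlopen(req).read().decode()

az = imds_get("placement/availability-zone")  # e.g. "us-east-1a"
region = az[:-1]  # standard AZ names are the region plus one letter

# DC = AWS region, data hall = AZ (the mapping I'm considering above).
# Flag spelling (--locality_dcid vs --locality-dcid) varies by FDB version.
print(f"--locality_dcid {region} --locality_data_hall {az}")
```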

But perhaps this is also not the best setup, and I should instead fail over to the west coast earlier, without sync replication across the coasts?

Following "Multi-region configuration - spreading copies inside a region across AZs", I see that maybe I should be using a data hall for each AZ, a DC for each AWS region, and finally an FDB region for each coast.

I see that "In release 6.0, FoundationDB supports at most two regions", but 7.2 is the most recent version; is this still the case? I could not find anything in the changelogs about this. This feels like the only limitation right now that prevents a simple DC->AZ mapping from achieving this.

Studying the docs more, it seems the better setup would actually be to have 2 east and 2 west, and fail over to the west coast when we lose a single DC (AWS region) on the east coast, to keep local latencies low.
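The symmetric variant would then look something like this (same placeholder ids as before, plus usw2), so that whichever coast is primary still has an in-region satellite for synchronous replication:

```json
{
  "regions": [
    {
      "datacenters": [
        { "id": "use1", "priority": 1 },
        { "id": "use2", "priority": 0, "satellite": 1 }
      ],
      "satellite_redundancy_mode": "one_satellite_double",
      "satellite_logs": 2
    },
    {
      "datacenters": [
        { "id": "usw1", "priority": 0 },
        { "id": "usw2", "priority": 0, "satellite": 1 }
      ],
      "satellite_redundancy_mode": "one_satellite_double",
      "satellite_logs": 2
    }
  ]
}
```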

To me this means using the three_data_hall redundancy mode and specifying each AZ as a data hall, so that data is spread appropriately across AZs; that should help preserve the original DC for longer before we decide to fail over to the west coast. It prevents the situation where all the copies required to keep a DC alive are stored in one AZ, so losing a single AZ alone can never force a failover to another region.
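As a sketch of the knobs involved (I'm not yet sure whether three_data_hall composes with the regions configuration, so this shows the single-region form, and again the exact flag spelling depends on the FDB version):

```
# On each fdbserver process: tag the AZ as its data hall and the AWS
# region as its DC (check `fdbserver --help` for the exact flag names).
fdbserver ... --locality_dcid us-east-1 --locality_data_hall us-east-1a

# Then ask FDB to spread the three data copies across three distinct
# data halls (AZs):
fdbcli --exec "configure three_data_hall"
```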