Setting the primary data center does not work as expected

(Hieu Nguyen) #1

I have two FDB cluster setups: SingleDC and MultiDC (two regions with one primary DC, one standby DC, and one satellite DC).

My client issues read-only requests to each FDB cluster. With MultiDC, the client is located in the primary DC, yet the latency of the MultiDC setup is higher than that of SingleDC. In the client, I have already set the data center ID to the primary DC.
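For reference, this is roughly how the client sets it with the Python bindings (a minimal sketch; 'dc-primary' is a placeholder for my actual datacenter ID):

```python
import fdb

fdb.api_version(600)  # client API version matching FDB 6.0.x

db = fdb.open()  # uses the default cluster file

# Tell the client which datacenter it is running in, so that reads
# are load-balanced toward storage replicas in the same DC. The ID
# must match the datacenter ID configured on the fdbserver processes
# there. 'dc-primary' stands in for my real DC ID.
db.options.set_datacenter_id('dc-primary')
```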

If I understand correctly, the latency with MultiDC should be comparable to SingleDC in my setup, since the workload is read-only and, with MultiDC, the client should always go to the primary DC to get the read version and perform the reads. Am I missing anything? My FDB version is 6.0.15.
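For context, here is roughly how I measure the latency, splitting the read-version request from the actual read (a rough sketch with the Python bindings; the key and DC ID are placeholders, and my real workload reads several keys per transaction):

```python
import time
import fdb

fdb.api_version(600)
db = fdb.open()
db.options.set_datacenter_id('dc-primary')  # placeholder DC ID

# Split the read path into its two components: getting the read
# version (a round trip to the proxies in the primary DC) and the
# key-value read itself (served by storage servers, which should
# be local ones given the datacenter ID above).
tr = db.create_transaction()

t0 = time.time()
tr.get_read_version().wait()   # blocks until the read version arrives
t1 = time.time()
tr.get(b'some-key').wait()     # blocks until the read completes
t2 = time.time()

print('GRV:  %.1f ms' % ((t1 - t0) * 1000))
print('read: %.1f ms' % ((t2 - t1) * 1000))
```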

Updated:

  • If I put my client at the primary DC, the latency is ~2 times higher than SingleDC (15 ms vs. 7.5 ms).
  • If I put my client at the standby DC, the latency is ~5 times higher (35 ms, i.e. another 20 ms on top).
    (Note that these are read transactions that may read multiple key-value pairs, not single key-value reads.)

(Alex Miller) #2

If you ssh to a host in your primary DC, what’s the round-trip time to your satellite DC, and what’s the round-trip time to the remote DC? (i.e., ping them, and what’s the latency?)

(Hieu Nguyen) #3

@alexmiller:
Round-trip time from a node in the primary DC to a node in the remote DC is ~7–8 ms.
Round-trip time from a node in the primary DC to a node in the satellite DC is ~7–8 ms.

If a transaction is read-only (and executed at the primary DC), how frequently does it go to the other DCs, and how many round trips does it usually make to them? As I understand it, it should not need to go to either the satellite DC or the remote DC, should it?