Satellite DC is wrong in the remote region

I’m trying to configure a two-region FoundationDB deployment.

I set up 5 VMs:

  • two on the west coast: wc1, wc2
  • two on the east coast: ec1, ec2
  • one in a witness region: witness

3 coordinators: wc1, ec1, witness

The regions.json is as follows:

{
  "usable_regions": 2,
  "regions":[
    {
        "datacenters":[{
            "id":"wc1",
            "priority":1
        },{
            "id":"wc2",
            "priority":0,
            "satellite":1,
            "satellite_logs":1
        }],
        "satellite_redundancy_mode":"one_satellite_single"
    },
    {
        "datacenters":[{
            "id":"ec1",
            "priority":0
        },{
            "id":"ec2",
            "priority":0,
            "satellite":1,
            "satellite_logs":1
        }],
        "satellite_redundancy_mode":"one_satellite_single"
    }
  ]
}
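
For reference, a region configuration file like this is loaded from fdbcli with the fileconfigure command; with the file saved as regions.json that is:

fdb> fileconfigure regions.json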

After the configuration was accepted by FDB, the status doesn’t look right, because the remote region shows two satellite DCs:

fdb> status

Using cluster file `fdb.cluster'.

Configuration:
  Redundancy mode        - single
  Storage engine         - memory-2
  Coordinators           - 3
  Usable Regions         - 2
  Regions:
    Primary -
        Datacenter                    - wc1
        Satellite datacenters         - wc2
        Satellite Redundancy Mode     - one_satellite_single
    Remote -
        Datacenter                    - ec1
        Satellite datacenters         - wc2, ec2
        Satellite Redundancy Mode     - one_satellite_single

Cluster:
  FoundationDB processes - 5
  Zones                  - 5
  Machines               - 5
  Memory availability    - 13.0 GB per process on machine with least available
  Fault Tolerance        - 0 machines (1 without data loss)
  Server time            - 02/23/21 19:42:44

Data:
  Replication health     - Healthy
  Moving data            - 0.000 GB
  Sum of key-value sizes - 0 MB
  Disk space used        - 213 MB

My questions are:

  • Is the configuration wrong?
  • How do I correct the cluster status? I want the remote region to have only one satellite DC (ec2).

FDB version: 6.2.28
OS distro: CentOS 7.8

Thanks

This looks like a reporting bug in fdbcli. The satellites are stored in a list that isn’t cleared between regions, so each region prints all of the previous regions’ satellites as well.
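
A rough Python illustration of that pattern (not the actual fdbcli code, which is C++), just to show why the Remote line ends up with wc2 in it:

# Hypothetical sketch: the accumulator lives outside the per-region loop
# and is never reset, so later regions also report earlier satellites.
regions = [
    {"name": "Primary", "satellites": ["wc2"]},
    {"name": "Remote", "satellites": ["ec2"]},
]

satellite_dcs = []                          # bug: shared across iterations
for region in regions:
    satellite_dcs += region["satellites"]   # accumulates instead of resetting
    print(region["name"], "satellite datacenters:", ", ".join(satellite_dcs))

# Output:
#   Primary satellite datacenters: wc2
#   Remote satellite datacenters: wc2, ec2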

Thank you.

Can I check the configuration from some system keys?

I’ve filed a PR to address this.

Can I check the configuration from some system keys?

You can check the output of status json in fdbcli, which is where status gets the data that it uses to create the summary output above. There should be data about your region configuration in cluster.configuration.regions.
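
If you want to pull that out programmatically, the same document is exposed through the special key \xff\xff/status/json, so it can also be read from a client binding rather than fdbcli. A minimal sketch using the Python binding (assumes the default cluster file; adjust the API version to your client):

import json
import fdb

fdb.api_version(620)   # 6.2 client API, matching the 6.2.28 server above
db = fdb.open()        # uses the default fdb.cluster file

# Same JSON document that `status json` prints in fdbcli.
raw = db[b'\xff\xff/status/json']
status = json.loads(bytes(raw))

# The region configuration the cluster actually stored:
print(json.dumps(status['cluster']['configuration']['regions'], indent=2))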