Oleg, good hint. I compared the topology reports of the destination cluster before and after the DR, and found that in the after report many roles disappeared, affecting all 3 DCs.
The BEFORE report:
FDB PROCESS Breakdown (Count) by DC and ROLE
 CNT  DC-Role
----  -------------
   1  dc1 cluster_controller
   1  dc1 data_distributor
  10  dc1 log
   1  dc1 master
   2  dc1 proxy
   1  dc1 ratekeeper
   1  dc1 resolver
  48  dc1 storage
  10  dc2 log
  10  dc3 log
   9  dc3 router
  48  dc3 storage
The AFTER report:
FDB PROCESS Breakdown (Count) by DC and ROLE
 CNT  DC-Role
----  -------------
   1  dc1 cluster_controller
   1  dc1 data_distributor
   1  dc1 master
   1  dc1 ratekeeper
   3  dc2 coordinator
   3  dc3 coordinator
The above reports were produced from the status JSON outputs by a script.
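For anyone who wants to reproduce the breakdown, here is a minimal sketch of that kind of script (not the exact one I used). It assumes the standard status JSON layout, i.e. cluster.processes.<id>.locality.dcid and cluster.processes.<id>.roles[].role, and takes the dump file as its first argument:

#!/usr/bin/env python3
# Count FDB processes by (DC, role) from a `status json` dump.
import json
import sys
from collections import Counter

with open(sys.argv[1]) as f:
    status = json.load(f)

counts = Counter()
for proc in status["cluster"]["processes"].values():
    # dcid is only present when the process was started with a DC locality.
    dc = proc.get("locality", {}).get("dcid", "unknown")
    for role in proc.get("roles", []):
        counts[(dc, role["role"])] += 1

print("FDB PROCESS Breakdown (Count) by DC and ROLE")
print(" CNT  DC-Role")
print("----  -------------")
for (dc, role), cnt in sorted(counts.items()):
    print(f"{cnt:4d}  {dc} {role}")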
When I looked into the status JSON files, the before-JSON had a configuration section:
"configuration" : {
"coordinators_count" : 9,
"excluded_servers" : [
],
"log_spill" : 2,
"logs" : 30,
"proxies" : 2,
"redundancy_mode" : "triple",
"regions" : [
{
"datacenters" : [
{
"id" : "dc1",
"priority" : 2
},
{
"id" : "dc2",
"priority" : 0,
"satellite" : 1
}
],
"satellite_logs" : 10,
"satellite_redundancy_mode" : "one_satellite_double"
},
{
"datacenters" : [
{
"id" : "dc3",
"priority" : 1
}
]
}
],
"storage_engine" : "ssd-2",
"usable_regions" : 2
},
However, the entire configuration section is gone in the after-JSON.
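A quick way to verify this, assuming the two dumps are saved under the hypothetical names before.json and after.json:

import json

for name in ("before.json", "after.json"):  # hypothetical file names
    with open(name) as f:
        cluster = json.load(f)["cluster"]
    # "configuration" sits directly under "cluster" in status JSON.
    print(name, "has configuration section:", "configuration" in cluster)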
This is unexpected. It seems to me that DR conflicts with the destination cluster’s 2-region/3-DC architecture.
@osamarin Do you have any idea? Thanks.