Migrating from one large cluster to another

What kind of TCP performance tuning have you done to improve network latency and response times? Sometimes improving how the underlying OS handles network calls can greatly improve the CC.

These are the FDB-specific sysctl settings we run on Ubuntu; they have greatly improved our reliability.

      - { name: net.core.somaxconn, value: 1000 }
      - { name: net.core.netdev_max_backlog, value: 5000 }
      - { name: net.core.rmem_max, value: 16777216 }
      - { name: net.core.wmem_max, value: 16777216 }
      - { name: net.ipv4.tcp_wmem, value: "4096 12582912 16777216" }
      - { name: net.ipv4.tcp_rmem, value: "4096 12582912 16777216" }
      - { name: net.ipv4.tcp_max_syn_backlog, value: 8096 }
      - { name: net.ipv4.tcp_slow_start_after_idle, value: 0 }
      - { name: net.ipv4.tcp_tw_reuse, value: 1 }
      - { name: net.core.default_qdisc, value: "fq_codel" }
      - { name: net.ipv4.tcp_mtu_probing, value: 1 }
      - { name: net.ipv4.tcp_tw_recycle, value: 1 }
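As a side note, a quick way to confirm a running kernel actually reflects these values is to read them back out of `/proc/sys`. The sketch below is not from the original post; `DESIRED` just restates the list above, and missing entries are reported rather than treated as fatal (for example, `net.ipv4.tcp_tw_recycle` was removed in Linux 4.12, so it will show up as missing on newer kernels).

```python
# Minimal sketch: compare desired sysctl values against what the kernel
# is actually running, by reading the corresponding /proc/sys files.
from pathlib import Path

# Mirrors the sysctl list above (subset shown; extend as needed).
DESIRED = {
    "net.core.somaxconn": "1000",
    "net.core.netdev_max_backlog": "5000",
    "net.core.rmem_max": "16777216",
    "net.core.wmem_max": "16777216",
    "net.ipv4.tcp_wmem": "4096 12582912 16777216",
    "net.ipv4.tcp_rmem": "4096 12582912 16777216",
    "net.ipv4.tcp_slow_start_after_idle": "0",
}

def sysctl_path(name: str) -> Path:
    # net.core.somaxconn -> /proc/sys/net/core/somaxconn
    return Path("/proc/sys") / name.replace(".", "/")

def check(desired: dict) -> dict:
    """Return {sysctl_name: actual_value} for every mismatch or missing key."""
    mismatches = {}
    for name, want in desired.items():
        path = sysctl_path(name)
        if not path.exists():
            mismatches[name] = "missing (kernel may not support this key)"
            continue
        # Normalize whitespace: tcp_rmem and friends are tab-separated triples.
        have = " ".join(path.read_text().split())
        if have != want:
            mismatches[name] = have
    return mismatches
```

Running `check(DESIRED)` on a tuned host should return an empty dict; anything it returns is a key to re-apply with `sysctl -w` or via your config management.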

For instance, one of our larger SSD clusters is roughly 16 nodes with 256 processes and 79 TB in the KV store; we can sustain around 28k reads and 33k writes while adding and removing capacity without incident.

  Read rate              - 28099 Hz
  Write rate             - 33465 Hz
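Those rates come straight from `status`; if you want to track them programmatically, something like the following works. This is a sketch, not from the original post: the JSON path (`cluster.workload.operations`) matches what recent `fdbcli --exec "status json"` output looks like, but verify it against your FDB version.

```python
# Sketch: extract the cluster-wide read/write op rates (Hz) from the
# JSON form of FoundationDB's status output.
import json
import subprocess

def op_rates(status: dict) -> tuple[float, float]:
    """Return (read_hz, write_hz) from a parsed `status json` document."""
    ops = status["cluster"]["workload"]["operations"]
    return ops["reads"]["hz"], ops["writes"]["hz"]

def cluster_rates() -> tuple[float, float]:
    # Assumes fdbcli is on PATH and can reach the cluster.
    raw = subprocess.run(
        ["fdbcli", "--exec", "status json"],
        capture_output=True, text=True, check=True,
    ).stdout
    return op_rates(json.loads(raw))

# Example document shape, mirroring the numbers above:
sample = {"cluster": {"workload": {"operations": {
    "reads": {"hz": 28099.0},
    "writes": {"hz": 33465.0},
}}}}
```

Polling this on an interval is an easy way to watch whether a capacity change (adding or excluding processes) dents sustained throughput.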

Our disk configuration and layout may be drastically different from yours, though: we heavily utilize AWS primitives for high-speed read caches with high hit rates, plus LVM RAID to improve write throughput.