I am running my cluster in AWS. The cluster is binding to the private IPs, which are all the hosts know about. I can connect via the fdb client from an EC2 instance that can route using private IPs. I can't get a good bind from my home lab, which hits AWS via a public IP through an internet gateway that routes back to the private IPs the EC2 cluster instances are running on. Is there a workaround?
The cluster file has a bunch of private addresses in it, right? And then you copied that over to your client? What did you do about the private addresses listed in the file?
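For reference, a cluster file is a single line in the form `description:ID@coordinator,coordinator,...`, and the coordinator addresses are what the client dials first. All values below are hypothetical:

```
mydb:Ab1cD2e3@10.0.1.5:4500,10.0.1.6:4500,10.0.1.7:4500
```

If those coordinator addresses are private, a client outside the VPC has nothing it can actually reach.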
I suspect the best and easiest solution is to set up WireGuard and just give your home system proper, functional routing to those private EC2 addresses. Probably any working solution is going to look kind of like that.
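A minimal sketch of the home-side WireGuard config, with every key, address, and range hypothetical:

```
# /etc/wireguard/wg0.conf on the home machine (all values hypothetical)
[Interface]
PrivateKey = <home-private-key>
Address = 172.16.0.2/32

[Peer]
PublicKey = <ec2-peer-public-key>
Endpoint = 54.12.3.4:51820    # public IP of an EC2 instance running the other end
AllowedIPs = 10.0.0.0/16      # route the VPC's private subnet through the tunnel
```

With a matching peer on an EC2 instance that forwards traffic into the VPC, `wg-quick up wg0` gives the home client real routes to the private addresses in the cluster file, so no address translation is needed.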
I have all that working. I just can't connect via NAT. I expect it's not possible due to the way the database connection string is formatted.
FoundationDB stores IP addresses in the database, and other processes read them out and assume they can connect to them. It also transmits "interfaces" from one process to another, which is an IP:port with some extra data, and assumes that if process A can connect to that IP:port, then process B can also connect to it. The internal design of FDB is just very unfriendly to NAT, so you need to have your clients on the same network as your fdbservers.
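To make the failure mode concrete, here is an illustrative sketch (not FDB code; all addresses hypothetical) of why translating only the coordinator address isn't enough: the cluster hands back the private addresses it knows, and the client assumes it can dial them directly.

```python
# Hypothetical private EC2 range and a NAT mapping for one coordinator only.
PRIVATE_PREFIX = "10.0."
NAT_MAP = {"10.0.1.5:4500": "54.12.3.4:4500"}

def reachable_from_home(addr: str) -> bool:
    """A home client can reach public addresses, but not private ones."""
    return not addr.startswith(PRIVATE_PREFIX)

# The client can reach the coordinator through its public NAT mapping...
coordinator_public = NAT_MAP["10.0.1.5:4500"]
assert reachable_from_home(coordinator_public)

# ...but the cluster replies with the private addresses stored in the
# database, and the client assumes it can connect to each one directly.
advertised = ["10.0.1.5:4500", "10.0.1.6:4500", "10.0.1.7:4500"]
unreachable = [a for a in advertised if not reachable_from_home(a)]
print(unreachable)
```

Every advertised address is private, so the connection stalls no matter how the NAT is configured; a VPN-style route is what actually fixes it.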