Hello, I have a cluster with the configuration below (from `fdbcli` status).
I set all the process classes explicitly in the configuration, and I make sure the classes are well separated. However, I still end up in the situation below: one of the storage servers also picks up the log role.
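For reference, the class assignments come from per-process sections in foundationdb.conf, roughly like this (a sketch, not my exact file; ports and the storage/log split are illustrative):

```ini
## foundationdb.conf (sketch; each process gets its class pinned
## in its own [fdbserver.<port>] section, overriding [fdbserver])
[fdbserver.4500]
class = storage

[fdbserver.4501]
class = storage

[fdbserver.4502]
class = storage
```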
Using cluster file `/etc/foundationdb/fdb.cluster'.
Configuration:
  Redundancy mode - double
  Storage engine - ssd-redwood-1-experimental
  Coordinators - 5
  Desired Commit Proxies - -1
  Desired GRV Proxies - -1
  Desired Resolvers - -1
  Desired Logs - -1
  Desired Remote Logs - -1
  Desired Log Routers - -1
  Usable Regions - 2
  Regions:
    Primary -
      Datacenter - DC1
      Satellite datacenters - DC2, DC3
      Satellite Redundancy Mode - one_satellite_triple
      Satellite Logs - 1
    Remote -
      Datacenter - DC3
      Satellite datacenters - DC2, DC1
      Satellite Redundancy Mode - one_satellite_triple
      Satellite Logs - 1

Cluster:
  FoundationDB processes - 89
  Zones - 13
  Machines - 13
  Memory availability - 5.9 GB per process on machine with least available
  Retransmissions rate - 0 Hz
  Fault Tolerance - 2 machines
  Server time - 10/21/22 12:43:10

Data:
  Replication health - Healthy
  Moving data - 0.000 GB
  Sum of key-value sizes - 0 MB
  Disk space used - 80.396 GB

Operating space:
  Storage server - 1800.7 GB free on most full server
  Log server - 1800.6 GB free on most full server

Workload:
  Read rate - 228 Hz
  Write rate - 0 Hz
  Transactions started - 54 Hz
  Transactions committed - 1 Hz
  Conflict rate - 0 Hz

Backup and DR:
  Running backups - 0
  Running DRs - 0

Client time: 10/21/22 12:43:10
WARNING: A single process is both a transaction log and a storage server.
For best performance use dedicated disks for the transaction logs by setting process classes.
Inventory of classes and roles, taken from `status json` (scroll down to xx.xx.xx.26:4502:tls):
Address | Class | Roles
xx.xx.xx.10:4500:tls | coordinator | coordinator
xx.xx.xx.10:4501:tls | log |
xx.xx.xx.10:4502:tls | log |
xx.xx.xx.10:4503:tls | log | log
xx.xx.xx.10:4504:tls | log | log
xx.xx.xx.13:4500:tls | coordinator | coordinator
xx.xx.xx.13:4501:tls | log |
xx.xx.xx.13:4502:tls | log |
xx.xx.xx.13:4503:tls | log |
xx.xx.xx.13:4504:tls | log |
xx.xx.xx.18:4500:tls | coordinator | coordinator
xx.xx.xx.18:4501:tls | log |
xx.xx.xx.18:4502:tls | log |
xx.xx.xx.18:4503:tls | log | log
xx.xx.xx.18:4504:tls | log |
xx.xx.xx.19:4500:tls | coordinator | coordinator
xx.xx.xx.19:4501:tls | log | log
xx.xx.xx.19:4502:tls | log |
xx.xx.xx.19:4503:tls | log | log
xx.xx.xx.19:4504:tls | log |
xx.xx.xx.22:4500:tls | storage | storage
xx.xx.xx.22:4501:tls | storage | storage
xx.xx.xx.22:4502:tls | storage | storage
xx.xx.xx.22:4503:tls | storage | storage
xx.xx.xx.22:4504:tls | storage | storage
xx.xx.xx.22:4505:tls | storage | storage
xx.xx.xx.22:4506:tls | storage | storage
xx.xx.xx.22:4507:tls | storage | storage
xx.xx.xx.23:4500:tls | storage | storage
xx.xx.xx.23:4501:tls | storage | storage
xx.xx.xx.23:4502:tls | storage | storage
xx.xx.xx.23:4503:tls | storage | storage
xx.xx.xx.23:4504:tls | storage | storage
xx.xx.xx.23:4505:tls | storage | storage
xx.xx.xx.23:4506:tls | storage | storage
xx.xx.xx.23:4507:tls | storage | storage
xx.xx.xx.26:4500:tls | storage | storage
xx.xx.xx.26:4501:tls | storage | storage
xx.xx.xx.26:4502:tls | storage | log,storage <---- HERE
xx.xx.xx.26:4503:tls | storage | storage
xx.xx.xx.26:4504:tls | storage | storage
xx.xx.xx.26:4505:tls | storage | storage
xx.xx.xx.26:4506:tls | storage | storage
xx.xx.xx.26:4507:tls | storage | storage
xx.xx.xx.27:4500:tls | storage | storage
xx.xx.xx.27:4501:tls | storage | storage
xx.xx.xx.27:4502:tls | storage | storage
xx.xx.xx.27:4503:tls | storage | storage
xx.xx.xx.27:4504:tls | storage | storage
xx.xx.xx.27:4505:tls | storage | storage
xx.xx.xx.27:4506:tls | storage | storage
xx.xx.xx.27:4507:tls | storage | storage
xx.xx.xx.30:4500:tls | stateless |
xx.xx.xx.30:4501:tls | stateless |
xx.xx.xx.30:4502:tls | stateless | router
xx.xx.xx.30:4503:tls | stateless | router
xx.xx.xx.30:4504:tls | stateless |
xx.xx.xx.30:4505:tls | stateless | router
xx.xx.xx.30:4506:tls | stateless |
xx.xx.xx.30:4507:tls | stateless |
xx.xx.xx.33:4500:tls | stateless |
xx.xx.xx.33:4501:tls | stateless |
xx.xx.xx.33:4502:tls | stateless |
xx.xx.xx.33:4503:tls | stateless |
xx.xx.xx.33:4504:tls | stateless |
xx.xx.xx.33:4505:tls | stateless |
xx.xx.xx.33:4506:tls | stateless |
xx.xx.xx.33:4507:tls | stateless |
xx.xx.xx.6:4500:tls | stateless |
xx.xx.xx.6:4501:tls | stateless | commit_proxy
xx.xx.xx.6:4502:tls | stateless |
xx.xx.xx.6:4503:tls | stateless | grv_proxy
xx.xx.xx.6:4504:tls | stateless | data_distributor
xx.xx.xx.6:4505:tls | stateless | commit_proxy
xx.xx.xx.6:4506:tls | stateless |
xx.xx.xx.6:4507:tls | stateless | master
xx.xx.xx.7:4500:tls | stateless | resolver
xx.xx.xx.7:4501:tls | stateless | commit_proxy
xx.xx.xx.7:4502:tls | stateless |
xx.xx.xx.7:4503:tls | stateless | ratekeeper
xx.xx.xx.7:4504:tls | stateless |
xx.xx.xx.7:4505:tls | stateless |
xx.xx.xx.7:4506:tls | stateless | cluster_controller
xx.xx.xx.7:4507:tls | stateless |
xx.xx.xx.9:4500:tls | coordinator | coordinator
xx.xx.xx.9:4501:tls | log |
xx.xx.xx.9:4502:tls | log | log
xx.xx.xx.9:4503:tls | log | log
xx.xx.xx.9:4504:tls | log | log
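The table above was assembled from `status json`; a minimal sketch of the extraction, assuming the 7.1-era schema as I read it (`cluster.processes` is a map of process id to an object with `address`, `class_type`, and a `roles` list of `{"role": ...}` entries):

```python
def inventory(status):
    """Flatten an fdbcli `status json` document into (address, class, roles) rows.

    Assumes the 7.1-era schema: cluster.processes maps process id -> object
    with `address`, `class_type`, and a `roles` list of {"role": ...} dicts.
    """
    rows = []
    for proc in status["cluster"]["processes"].values():
        addr = proc["address"]
        cls = proc.get("class_type", "")
        # Join all roles a process carries, e.g. "log,storage"
        roles = ",".join(r["role"] for r in proc.get("roles", []))
        rows.append((addr, cls, roles))
    return sorted(rows)

# Usage sketch:
#   fdbcli --exec 'status json' > status.json
#   then: for a, c, r in inventory(json.load(open("status.json"))):
#             print(f"{a} | {c} | {r}")
```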
As you can see, there are log-class processes that carry no role at all, yet one storage-class process still gets a log role.
My current thinking is that this might happen while nodes are still joining, and FDB never gets around to reassigning the role?
I should also mention that the cluster is currently unused. Is it possible that the role will be reassigned once transactions start coming in?
What would be the best way to prevent this from happening? All the classes are pre-assigned in the config…
Later edit: FDB version 7.1.15 (but I'm fairly sure I saw this behavior on 7.1.23 as well).