We have some machines with plenty of CPU and a few SSDs

We have 10 machines, each with 40 CPU cores and 6 × 2 TB SSDs.

We read the documentation and found this:

https://apple.github.io/foundationdb/configuration.html#guidelines-for-setting-process-class

class: Process class specifying the roles that will be taken in the cluster. Recommended options are storage, transaction, stateless.

But elsewhere we found class settings like proxy and log, and we also found a test class.

Where can we find a full description of the options for this class setting?

And how do we calculate the number of processes of each class type per machine?

Documentation is definitely something we need to improve on…

As of FDB 6.2 (which will be released soon), these will be the possible classes (from the source code):

    enum ClassType { UnsetClass,
                     StorageClass,
                     TransactionClass,
                     ResolutionClass,
                     TesterClass,
                     ProxyClass,
                     MasterClass,
                     StatelessClass,
                     LogClass,
                     ClusterControllerClass,
                     LogRouterClass,
                     DataDistributorClass,
                     CoordinatorClass,
                     RatekeeperClass,
                     InvalidClass = -1 };

As you can see, there are many more classes than the documentation mentions (although, to be fair, some of them don't exist in FDB 6.1). However, for most workloads you probably won't need to set these classes explicitly.
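If you do want to set one explicitly, a class is assigned per process in foundationdb.conf. A minimal sketch (the port here is just a placeholder):

[fdbserver.4500]
class = log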

The strategy I would recommend for you:

  1. Start with something simple. I think a good layout would be to run, per machine:
  • two TLogs (these can share one disk)
  • 10 storage processes (two on each of the remaining five disks)
  • 1-2 stateless processes
  This will probably run reasonably well; see the conf sketch after this list.
  2. If you run into any issues, iterate and refine the configuration from there.
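To make that concrete, here is a rough foundationdb.conf sketch of such a layout. The ports, mount points (/mnt/ssd1 and so on), and data directories are placeholders; adjust them to however your disks are actually mounted:

# TLogs: two processes sharing the first SSD
[fdbserver.4500]
class = log
datadir = /mnt/ssd1/4500

[fdbserver.4501]
class = log
datadir = /mnt/ssd1/4501

# storage: two processes on each of the remaining five disks (10 total)
[fdbserver.4502]
class = storage
datadir = /mnt/ssd2/4502

[fdbserver.4503]
class = storage
datadir = /mnt/ssd2/4503

# ...same pattern for /mnt/ssd3 through /mnt/ssd6...

# stateless: these barely touch the disk, so placement hardly matters
[fdbserver.4512]
class = stateless
datadir = /mnt/ssd1/4512

[fdbserver.4513]
class = stateless
datadir = /mnt/ssd1/4513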

You should probably make sure you have 8 GB of memory per process. I would expect a configuration like this to be good enough for a long time before you run into any issues. After that, you will need to identify bottlenecks and refine accordingly.

The warning he's getting from fdbcli comes from not explicitly setting these classes. By default, nothing stops FDB from recruiting storage servers on TLogs, which is a sufficiently frequent source of poor performance that a warning was eventually added for it.

I don’t think I’ve ever benchmarked more than one TLog per disk. I would expect that to increase latency, due to splitting the required fsync() work into two calls?

OK. Which SSD should the stateless-class processes use? The TLog SSD, or one stateless process per disk?

6 SSDs = 1 TLog SSD + 5 × 2 storage, with one stateless process on each of the 6 SSDs?

or

6 SSDs = 1 TLog SSD + 5 × 2 storage, with 2 stateless processes on the TLog SSD?

Also, we have 400 GB of RAM per machine.

Stateless processes won’t actually use the disk, so you can put them anywhere without much impact. Putting them all on one SSD is fine; spreading them out is fine. They only store a 4 KB file in the data directory and emit a normal amount of logs.

Six stateless processes per host seems a bit much, so two stateless processes on the TLog SSD sounds fine.

Sadly, FDB is not great at utilizing CPU cores (and if you run many more processes, your disks might become over-utilized). To a large degree this makes sense, because FDB is mostly I/O-bound.

However, you can make better use of your memory. To be on the safe side, I would recommend giving each process at least 8 GB of memory (which is the default). On your machines this leaves roughly 100 GB of unused memory (104 GB, though you probably want to reserve some of it for the OS and the operational tools you’re going to run on these machines). We were very successful in using our memory for caching. So you could give each storage process an additional 10 GB of memory for caching, which, if the workload is skewed to some degree, will be a great performance improvement.

To do so, you can change the --memory option for all storage servers. To use the memory for the cache you will also need to set two knobs. Sadly, you will need to set these options for each storage process individually (which might not be too bad if you auto-generate your foundationdb.conf file). If your storage processes run on ports 4503-4513 (as an example), your conf might look a bit like this:

[fdbserver.4503]
memory = 18GiB
knob_page_cache_4k = 12884901888 # 12 GiB

(The way this math works: a 2 GiB page cache is the default, and we added 10 GiB of memory, so we can add those 10 GiB to the 2 GiB we already have, giving 12 GiB. 12 × 2^30 = 12884901888.)
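Spelled out for every storage process (sticking with the example ports above, which are placeholders), the same section simply repeats:

[fdbserver.4503]
memory = 18GiB
knob_page_cache_4k = 12884901888

[fdbserver.4504]
memory = 18GiB
knob_page_cache_4k = 12884901888

# ...and identically for each remaining storage port through 4513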

For our workloads, with a similar config we see a cache-hit rate of ~99.99%, which makes a big difference for us…

If your cache-hit rate is similar, you might even be able to double the number of storage processes per disk (making the average cache size 5 GiB, which should give you the same cache-hit rate since the amount of data per process will be lower). This way you could improve CPU utilization and write throughput (we found that FDB is somewhat limited in how well it utilizes disks for writes; a problem that Redwood will hopefully solve).