From the TigerBeetle team's and Will Wilson's talks I learned about simulation testing and its benefit for deterministically testing complex distributed systems. After reading the code in fdbserver/workloads, I find it hard to see where the invariants of specific FoundationDB components are verified. For example, each Resolver should process requests from CommitProxies in strict version order, and it should do so even after it is recruited or the key range assigned to it changes. A TLogServer maintains many variables to track the cluster's durable committed version and to make recovery correct. How is the correctness of these implementations tested and verified?
I know that the simulated network and simulated files introduce many possible events, and that process and machine faults are injected based on a random seed, so the workloads are exercised under a huge number of faults and event interleavings. But I can't find the code showing how and where those invariants are verified. Are they somehow verified indirectly by running higher-level workloads that require the whole transaction system to work correctly (e.g. SerializabilityWorkload, CycleWorkload)? If I want to read TraceEvents to better understand how components work together to maintain invariants in specific cases, how should I write workloads, for example, to verify some invariants of the Resolver or the TLogServer mentioned above?