We're seeing performance problems when clearing large key ranges in foundationdb3: many storage nodes get pegged at 100% CPU, and perf top shows them spending their time in walMerge in sqlite. We're running foundationdb3 on EBS.
A couple of questions:
- Is this improved in foundationdb6?
- Would it make sense to expose a throttled clear range in the API? We're imagining something like issuing a special clear range with a budget for how much work it can do per storage node. This would fit well with the directory layer: clients would immediately see that a directory was removed, while the actual work of clearing its contents could be deferred and done incrementally.
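To make the idea concrete, here's a rough sketch of the budgeted-clear loop we have in mind, simulated against a toy in-memory ordered store (the name `clear_range_budgeted` and the API shape are hypothetical, not anything FoundationDB exposes today; a real version would run each pass in its own transaction):

```python
import bisect


class Store:
    """Toy ordered key-value store standing in for a storage node."""

    def __init__(self, keys):
        self.keys = sorted(keys)
        self.data = dict.fromkeys(self.keys, b"v")

    def clear_range_budgeted(self, begin, end, budget):
        """Hypothetical throttled clear: remove at most `budget` keys in
        [begin, end) per call; return True once the range is fully cleared."""
        lo = bisect.bisect_left(self.keys, begin)
        hi = bisect.bisect_left(self.keys, end)
        victims = self.keys[lo:min(hi, lo + budget)]
        for k in victims:
            del self.data[k]
        del self.keys[lo:lo + len(victims)]
        return hi - lo <= budget


def deferred_clear(store, begin, end, budget):
    """Drive the throttled clear to completion, one budgeted pass at a time.
    In the directory layer this loop would run in background transactions
    after the directory itself has already been unlinked."""
    passes = 1
    while not store.clear_range_budgeted(begin, end, budget):
        passes += 1
    return passes


store = Store(f"k{i:04d}" for i in range(1000))
# 800 keys in [k0100, k0900), budget of 100 per pass -> 8 passes
print(deferred_clear(store, "k0100", "k0900", budget=100))  # -> 8
```

The point is that each pass does a bounded amount of work, so a storage node never has to merge the whole cleared range in one go; the trade-off is that readers can observe a partially cleared range unless the layer hides it (e.g. by unlinking the directory first, as above).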