Is it possible to enable `perpetual_storage_wiggle` at the 'shard' level instead of the 'process' level?

I can’t answer the wiggle part, but we were facing a similar issue (a workload that continuously inserts new data but removes old data at the tail), which turned out to be related to how Redwood handles page slack on page splits.

This is true, but when you wipe the data, the deleted pages should be immediately available to store new data, even though they are not returned to the OS.

You can validate this by opening the trace logs from a storage server and looking for StorageMetrics events, where KvstoreBytesFree will be smaller than KvstoreBytesAvailable. I believe this is also exposed in status json for the processes.
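If you go the status json route, something like the sketch below can surface the gap. It assumes the `kvstore_free_bytes` / `kvstore_available_bytes` role fields as reported by `fdbcli --exec 'status json'`; the sample document here is trimmed and the values are made up:

```python
def reclaimable_bytes(status_json):
    """For each role reporting kvstore stats, return bytes that are free
    inside the data files but not yet returned to the OS
    (kvstore_available_bytes - kvstore_free_bytes)."""
    result = {}
    for pid, proc in status_json["cluster"]["processes"].items():
        for role in proc.get("roles", []):
            if "kvstore_available_bytes" in role:
                result[(pid, role["role"])] = (
                    role["kvstore_available_bytes"] - role["kvstore_free_bytes"]
                )
    return result

# Trimmed example document (illustrative numbers):
status = {
    "cluster": {
        "processes": {
            "abc123": {
                "roles": [
                    {
                        "role": "storage",
                        "kvstore_free_bytes": 50 * 2**30,       # OS-level free space
                        "kvstore_available_bytes": 80 * 2**30,  # includes freed pages
                    }
                ]
            }
        }
    }
}
print(reclaimable_bytes(status))  # {('abc123', 'storage'): 32212254720}
```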

It is easiest to validate when a log and a storage role share the same disk: the size available for the log will reflect the OS free space, while the size available for the storage will also include the deleted (now free) pages.

My understanding is that if you have 100GB of old data and add 100GB of new data, the OS will report 200GB used; when you delete the old set, this won’t change. But when you insert the next batch, it should be able to reuse the free pages, and disk usage should not grow much beyond 200GB.
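A toy model of that behavior (not actual Redwood internals, just the accounting described above: deleted pages go to a free list and are reused before the file grows):

```python
class PagerFile:
    """Minimal sketch: file size grows only when there are no free pages."""
    def __init__(self):
        self.file_pages = 0   # pages allocated from the OS (file size)
        self.free_pages = 0   # deleted pages available for reuse

    def insert(self, pages):
        reused = min(pages, self.free_pages)
        self.free_pages -= reused
        self.file_pages += pages - reused  # grow only for the remainder

    def delete(self, pages):
        self.free_pages += pages  # freed internally, not returned to the OS

f = PagerFile()
f.insert(100)   # old batch: file grows to 100 "GB"
f.insert(100)   # new batch: file grows to 200
f.delete(100)   # drop the old batch: file size unchanged
f.insert(100)   # next batch reuses the freed pages
print(f.file_pages)  # 200
```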