FoundationDB with HDD

Is it not possible to use FoundationDB with HDD? Or is it just not recommended? I cannot use it in ‘memory’ mode because my DB cannot fit in memory.

There’s nothing about the SSD storage engine that won’t work on an HDD. It’ll just be slow, as a number of trade-offs were made to favor SSDs over HDDs. Consider it “ssd-optimized”, not “ssd-required”.

The SSD engine was designed for SSDs with no effort made to have it run well on HDDs. I don’t think we have ever tested the engine on HDDs, but we do expect the performance to be pretty poor. We list SSDs as required for the SSD engine in our system requirements for this reason, and you can see in this section that we also caution about the performance and possibly even the availability of a system running the SSD storage engine on spinning disks or network attached storage.

That said, the software should at least run on HDDs, as Alex said. If you do try it, please let us know how it works out.

For our use case, SSD storage is not viable from a cost perspective, so we need to store on HDD. What optimizations affect performance on HDD? Any pointers to design decisions, or to specific parts of the code? Would you accept PRs to optimize for HDD as well, if we can make them, if not right away then over time?

Also, is something like this on the roadmap: keep hot, frequently used data on SSD and cold, rarely used data on HDD, and dynamically shift data between the tiers based on usage patterns?

Just did a quick single-machine, single-process test using transactions of 10 writes each, with 15-20 byte keys and 8 KB values:

  1. SSD: 7500 writes/sec
  2. HDD: 350 writes/sec

Going from SSD to HDD is a 95% performance drop, which is abysmal.
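For reference, the test was roughly of this shape. This is a sketch, not the exact harness: it assumes the `fdb` Python bindings and a reachable cluster, and `run_benchmark`, the key scheme, and the API version are illustrative choices.

```python
# Sketch of the benchmark described above. `run_benchmark` is generic,
# so the timing/key logic can be exercised without FoundationDB installed;
# the `fdb` parts below are an assumption about how the test was wired up.
import os
import time


def run_benchmark(write_batch, num_txns=100, writes_per_txn=10, value_size=8192):
    """Time `num_txns` transactions of `writes_per_txn` writes each.

    `write_batch(keys, value)` should commit one transaction writing
    `value` under every key in `keys`. Returns total writes/sec.
    """
    value = b"x" * value_size
    start = time.monotonic()
    for _ in range(num_txns):
        # 18-byte random keys, within the 15-20 byte range from the test
        keys = [b"b/" + os.urandom(8).hex().encode()
                for _ in range(writes_per_txn)]
        write_batch(keys, value)
    elapsed = time.monotonic() - start
    total = num_txns * writes_per_txn
    return total / elapsed if elapsed > 0 else float("inf")


def main():
    """Run against a live cluster (needs the fdb bindings installed)."""
    import fdb
    fdb.api_version(630)   # assumption: the API version you built against
    db = fdb.open()        # default cluster file

    @fdb.transactional
    def write_all(tr, keys, value):
        for k in keys:
            tr[k] = value

    rate = run_benchmark(lambda ks, v: write_all(db, ks, v))
    print(f"{rate:.0f} writes/sec")


if __name__ == "__main__":
    try:
        import fdb  # noqa: F401 -- only run live against a real cluster
    except ImportError:
        print("fdb bindings not installed; skipping live run")
    else:
        main()
```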


If you can make PRs to improve the ssd storage engine on HDDs, without compromising the ssd engine’s performance on SSDs, then that sounds great.

I’m not aware of any knobs you could change to get better ssd engine performance on HDDs, though. Random access is far worse on HDDs than on SSDs, which puts a premium on page read/write locality. I suspect larger page sizes would help on HDDs as well. The most HDD-friendly B-Tree design I recall seeing is the Bε-tree, on which TokuTek’s Fractal Tree was based.
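To put rough numbers on why random B-tree I/O is seek-bound on spinning disks: every random page access pays a seek plus rotational latency, so the random I/O rate is capped by latency almost regardless of page size. All figures below are generic assumptions for illustration, not FoundationDB measurements.

```python
# Back-of-the-envelope seek arithmetic; latency figures are assumed
# typical values, not measurements.
HDD_LATENCY_MS = 10.0   # assumed seek + rotational latency per random I/O
SSD_LATENCY_MS = 0.1    # assumed random-access latency on an SSD
PAGE_SIZE = 4096        # a common B-tree page size, in bytes

hdd_ios = 1000.0 / HDD_LATENCY_MS   # random I/Os per second on HDD
ssd_ios = 1000.0 / SSD_LATENCY_MS   # random I/Os per second on SSD

print(hdd_ios)                       # 100.0 random I/Os/sec
print(ssd_ios)                       # 10000.0 random I/Os/sec
print(hdd_ios * PAGE_SIZE / 1024)    # 400.0 KB/s of random 4 KB pages
```

Since the seek dominates, a 64 KB page costs little more than a 4 KB page on an HDD, which is why larger pages and write-optimized structures like the Bε-tree tend to help there.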

I’d be happy to see FoundationDB one day grow an hdd storage engine that’s friendly to spinning disk, but there’s no current work happening in that direction. Similarly, there’s no current work happening for a hybrid SSD/HDD storage engine.

Because you’re doing a single-machine, and thus probably single-drive, test, this isn’t very representative of how a cluster would perform. The transaction logs should still be fairly HDD-friendly (they mostly just append to a linear file), but if a log runs on the same HDD as a storage server, the excessive seeking done by the storage server will slaughter transaction log performance. As only the transaction log is involved in commits, I don’t think you should see that much of a throughput penalty in a multi-machine cluster.

On SSDs, we’ve seen something like a 1 transaction log : 7 storage server ratio being around what’s needed for storage servers to be able to keep up with the transaction log. I’d imagine the major difference you should see is that the ratio becomes larger for HDDs, due to decreased storage server throughput.

To do this test:

  1. Run two fdbserver instances with --datadir pointing at different HDD drives.
  2. Set one of them to --class=transaction, and the other to --class=storage.
  3. Run `configure new single ssd` in fdbcli.
  4. Run your benchmark.
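The steps above might look like this when launching the processes by hand; the ports, mount points, and directory layout are placeholders you’d adapt to your machines:

```shell
# Hypothetical two-process layout: one transaction log and one storage
# server, each with a --datadir on its own HDD.
fdbserver -p auto:4500 --class=transaction \
    --datadir /hdd1/fdb/4500 --logdir /hdd1/fdb/logs &
fdbserver -p auto:4501 --class=storage \
    --datadir /hdd2/fdb/4501 --logdir /hdd2/fdb/logs &

# Then initialize the database with the ssd storage engine:
fdbcli --exec 'configure new single ssd'
```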