Hello @pedrolamarao!
> Our application is an asynchronous key management service.
I am not sure what “asynchronous” means in your context.
The data set in this application is tiny by modern standards. Reading the rest of the paragraph, I gather it is around megabytes of data.
Quoting the documentation:
> `memory`: Maximum resident memory used by the process. The default value is 8GiB. When specified without a unit, MiB is assumed. Setting to 0 means unlimited. This parameter does not change the memory allocation of the program. Rather, it sets a hard limit beyond which the process will kill itself and be restarted. The default value of 8GiB is double the intended memory usage in the default configuration (providing an emergency buffer to deal with memory leaks or similar problems). It is not recommended to decrease the value of this parameter below its default value. It may be increased if you wish to allocate a very large amount of storage engine memory or cache. In particular, when the `storage-memory` or `cache-memory` parameters are increased, the `memory` parameter should be increased by an equal amount.
>
> (Configuration — FoundationDB 7.1)
If a FoundationDB process reaches 8 GiB of memory usage, it is killed by FoundationDB and the associated transactions are aborted. The 8 GiB limit does not mean that FoundationDB allocates 8 GiB at startup.
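For reference, that limit lives in `/etc/foundationdb/foundationdb.conf`. A sketch of raising it (the 16GiB value below is only an illustration, not a recommendation; the docs quoted above say to keep it in step with `storage-memory` / `cache-memory`):

```ini
[fdbserver]
# Hard limit on resident memory per fdbserver process; the process
# kills itself and is restarted if it goes beyond this.
# Without a unit, MiB is assumed.
memory = 16GiB
```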
Here is a screenshot of `htop` filtered on foundationdb while the cluster is at rest: less than 1 GB of RAM is used by ALL processes on that machine.
Here is the `fdbcli` status:

```
$ fdbcli
Using cluster file `/etc/foundationdb/fdb.cluster'.

The database is available.

Welcome to the fdbcli. For help, type `help'.
fdb> status

Using cluster file `/etc/foundationdb/fdb.cluster'.

Configuration:
  Redundancy mode        - single
  Storage engine         - memory-2
  Coordinators           - 1
  Usable Regions         - 1

Cluster:
  FoundationDB processes - 1
  Zones                  - 1
  Machines               - 1
  Memory availability    - 8.0 GB per process on machine with least available
  Fault Tolerance        - 0 machines
  Server time            - 07/08/22 12:46:11

Data:
  Replication health     - Healthy
  Moving data            - 0.000 GB
  Sum of key-value sizes - 21 MB
  Disk space used        - 245 MB

Operating space:
  Storage server         - 0.9 GB free on most full server
  Log server             - 1683.7 GB free on most full server

Workload:
  Read rate              - 21 Hz
  Write rate             - 0 Hz
  Transactions started   - 9 Hz
  Transactions committed - 0 Hz
  Conflict rate          - 0 Hz

Backup and DR:
  Running backups        - 0
  Running DRs            - 0

Client time: 07/08/22 12:46:11

fdb>
```
Canonical is working on dqlite, which could help your use case. Still, after looking around at other distributed, safe, secure, maintained, and resilient databases, it appears that FoundationDB, whether for a single box, a large deployment, or a web-scale deployment, and whether you use existing layers or only need a mere key-value mapping, is a better choice for you and your employer. The only problem is that there is still no buzz around FoundationDB… It may look like a big “investment”, but it is not.
Whether it is FoundationDB or something else, you should benchmark the software (ping @osamarin, who has experience with that).
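To make the benchmarking suggestion concrete, here is a minimal latency-harness sketch. The timed operation below is a stand-in; with the real `fdb` Python bindings you would time a transactional read instead (see the comment), and the iteration count and percentiles are just starting points:

```python
import statistics
import time


def bench(op, iterations=1000):
    """Call `op` repeatedly and report latency percentiles in milliseconds."""
    samples = []
    for _ in range(iterations):
        start = time.perf_counter()
        op()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50": statistics.median(samples),
        "p99": samples[int(len(samples) * 0.99) - 1],
        "mean": statistics.fmean(samples),
    }


# Stand-in workload. Against a real cluster you would do something like:
#   import fdb
#   fdb.api_version(710)
#   db = fdb.open()          # uses /etc/foundationdb/fdb.cluster by default
#   print(bench(lambda: db[b"some-key"]))
print(bench(lambda: sum(range(100))))
```

Run it at rest and again under your expected concurrent load; the p99 number is usually the one that matters for a key management service.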
I am very interested in small-scale deployments; many people around me think FoundationDB is overkill and prefer Redis or PostgreSQL for dubious reasons…
It appears you need to test the following:
- Backup and restore: given the size of the data you are working with, my guess is that you can set up your own backup strategy (record the wall-clock time of the restore process);
- Cluster resilience: make sure the data is still readable when one process or machine misbehaves;
- [edit] set up monitoring!
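For the backup-and-restore item, a command-line sketch with the stock `fdbbackup`/`fdbrestore` tools (the `file://` destination and tag name are placeholders; double-check the flags against the Backup, Restore, and Replication docs for your version before relying on this):

```shell
# Take a backup to a local directory and wait for it to finish.
fdbbackup start -d file:///var/backups/fdb -t mytag -w

# Record the wall-clock time of a full restore.
time fdbrestore start -r file:///var/backups/fdb -w
```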
Please share any findings with us!