Fair point. The idea behind configuring so many transaction processes was to be able to scale as our customer base converts to this solution. More data? Just add storage servers. In my mind, a tiered scaling approach keeps changes manageable up to the point where scaling the cluster's core infrastructure becomes a great problem to have. We could plausibly grow from a few TB to hundreds of TB of managed data within a year or two.
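To make that concrete, the scale-out step I have in mind is mostly just adding storage-class fdbserver processes on new machines. A minimal sketch of the per-process sections in foundationdb.conf, with placeholder ports and data dirs rather than our real layout:

```
# /etc/foundationdb/foundationdb.conf on a new storage machine
# (sketch only; ports and datadir paths are placeholders)
[fdbserver.4500]
class = storage
datadir = /mnt/fdb/4500

[fdbserver.4501]
class = storage
datadir = /mnt/fdb/4501
```

Adding machines with sections like these and letting data distribution rebalance onto them is what I mean by "just add storage servers."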
That said, my original process layout does seem like an overly complicated scheme now. I can always configure machine roles later if needed. Thanks for the input! I've outlined below where I'm getting my info. If you know of additional resources on best practices, I'd appreciate it.
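On the machine roles point: my understanding is that process classes can also be adjusted at runtime from fdbcli with `setclass` (no arguments lists each process and its current class; an address plus a class reassigns that process), so deferring the decision doesn't lock anything in. A sketch with made-up addresses:

```
fdb> setclass 10.0.1.12:4500 storage
fdb> setclass 10.0.1.13:4500 transaction
```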
Road map:
- FDB Fundamentals
a. Docs
b. Forum
c. https://www.youtube.com/watch?v=16uU_Aaxp9Y&list=PLbzoR-pLrL6q7uYN-94-p_-Q3hyAmpI7o
- Select Cloud Infra and Config
- Backup and Restore to Blob Store / Disaster Recovery (see the sketch after this list)
a. Docs
b. Fdbbackup expire with blobstore returning error, not deleting from S3
- Automate and Secure Deployment
a. Infra as Code - Done
b. Networking Security and TLS
- Tuning Servers
a. Forums
- Updates and Maintenance
a. Docs
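For the backup-to-blob-store item, the shape of what I plan to test first is roughly the following; the endpoint, credentials, and bucket are placeholders, so treat it as a sketch rather than our actual commands:

```
# S3-compatible blobstore backup URL (all values are placeholders)
BACKUP_URL='blobstore://KEY_ID:SECRET@s3.example.com:443/nightly?bucket=fdb-backups'

# Start a backup to the blob store
fdbbackup start -d "$BACKUP_URL"

# Check progress and restorability
fdbbackup status
```

Pruning old backup data would then be `fdbbackup expire` against the same URL, which is the step the forum thread above reports failing to delete objects from S3.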